NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/utils/warp/ops.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Wrapping around warp kernels for compatibility with torch tensors."""
from __future__ import annotations
import numpy as np
import torch
import warp as wp
from . import kernels
def raycast_mesh(
ray_starts: torch.Tensor,
ray_directions: torch.Tensor,
mesh: wp.Mesh,
max_dist: float = 1e6,
return_distance: bool = False,
return_normal: bool = False,
return_face_id: bool = False,
) -> tuple[torch.Tensor, torch.Tensor | None, torch.Tensor | None, torch.Tensor | None]:
"""Performs ray-casting against a mesh.
Note that the `ray_starts` and `ray_directions` tensors should have compatible shapes
and data types to ensure proper execution. Additionally, they must both be expressed in the same frame.
Args:
ray_starts: The starting position of the rays. Shape (N, 3).
ray_directions: The ray directions for each ray. Shape (N, 3).
mesh: The warp mesh to ray-cast against.
max_dist: The maximum distance to ray-cast. Defaults to 1e6.
return_distance: Whether to return the distance of the ray until it hits the mesh. Defaults to False.
return_normal: Whether to return the normal of the mesh face the ray hits. Defaults to False.
return_face_id: Whether to return the face id of the mesh face the ray hits. Defaults to False.
Returns:
The ray hit position. Shape (N, 3).
The returned tensor contains :obj:`float('inf')` for missed hits.
The ray hit distance. Shape (N,).
Only returned if :attr:`return_distance` is True, else None.
The returned tensor contains :obj:`float('inf')` for missed hits.
The ray hit normal. Shape (N, 3).
Only returned if :attr:`return_normal` is True, else None.
The returned tensor contains :obj:`float('inf')` for missed hits.
The ray hit face id. Shape (N,).
Only returned if :attr:`return_face_id` is True, else None.
The returned tensor contains :obj:`int(-1)` for missed hits.
"""
# extract device and shape information
shape = ray_starts.shape
device = ray_starts.device
# device of the mesh
torch_device = wp.device_to_torch(mesh.device)
# reshape the tensors
ray_starts = ray_starts.to(torch_device).view(-1, 3).contiguous()
ray_directions = ray_directions.to(torch_device).view(-1, 3).contiguous()
num_rays = ray_starts.shape[0]
# create output tensor for the ray hits
ray_hits = torch.full((num_rays, 3), float("inf"), device=torch_device).contiguous()
# map the memory to warp arrays
ray_starts_wp = wp.from_torch(ray_starts, dtype=wp.vec3)
ray_directions_wp = wp.from_torch(ray_directions, dtype=wp.vec3)
ray_hits_wp = wp.from_torch(ray_hits, dtype=wp.vec3)
if return_distance:
ray_distance = torch.full((num_rays,), float("inf"), device=torch_device).contiguous()
ray_distance_wp = wp.from_torch(ray_distance, dtype=wp.float32)
else:
ray_distance = None
ray_distance_wp = wp.empty((1,), dtype=wp.float32, device=torch_device)
if return_normal:
ray_normal = torch.full((num_rays, 3), float("inf"), device=torch_device).contiguous()
ray_normal_wp = wp.from_torch(ray_normal, dtype=wp.vec3)
else:
ray_normal = None
ray_normal_wp = wp.empty((1,), dtype=wp.vec3, device=torch_device)
if return_face_id:
ray_face_id = torch.full((num_rays,), -1, dtype=torch.int32, device=torch_device).contiguous()
ray_face_id_wp = wp.from_torch(ray_face_id, dtype=wp.int32)
else:
ray_face_id = None
ray_face_id_wp = wp.empty((1,), dtype=wp.int32, device=torch_device)
# launch the warp kernel
wp.launch(
kernel=kernels.raycast_mesh_kernel,
dim=num_rays,
inputs=[
mesh.id,
ray_starts_wp,
ray_directions_wp,
ray_hits_wp,
ray_distance_wp,
ray_normal_wp,
ray_face_id_wp,
float(max_dist),
int(return_distance),
int(return_normal),
int(return_face_id),
],
device=mesh.device,
)
# NOTE: Synchronize is not needed anymore, but we keep it for now. Check with @dhoeller.
wp.synchronize()
if return_distance:
ray_distance = ray_distance.to(device).view(shape[0], shape[1])
if return_normal:
ray_normal = ray_normal.to(device).view(shape)
if return_face_id:
ray_face_id = ray_face_id.to(device).view(shape[0], shape[1])
return ray_hits.to(device).view(shape), ray_distance, ray_normal, ray_face_id
def convert_to_warp_mesh(points: np.ndarray, indices: np.ndarray, device: str) -> wp.Mesh:
"""Create a warp mesh object with a mesh defined from vertices and triangles.
Args:
points: The vertices of the mesh. Shape is (N, 3), where N is the number of vertices.
indices: The triangles of the mesh as references to vertices for each triangle.
Shape is (M, 3), where M is the number of triangles / faces.
device: The device to use for the mesh.
Returns:
The warp mesh object.
"""
return wp.Mesh(
points=wp.array(points.astype(np.float32), dtype=wp.vec3, device=device),
indices=wp.array(indices.astype(np.int32).flatten(), dtype=wp.int32, device=device),
)
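# --- Usage sketch (illustrative; not part of the original file) ---
# Casts a single ray straight down onto a one-triangle mesh. The geometry and
# the CUDA device below are assumptions for demonstration. Note that the
# reshaping above implies batched inputs of shape (B, N, 3).
if __name__ == "__main__":
    wp.init()
    # a single triangle in the x-y plane
    points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    indices = np.array([[0, 1, 2]])
    mesh = convert_to_warp_mesh(points, indices, device="cuda:0")
    # one ray starting above the triangle, pointing down
    ray_starts = torch.tensor([[[0.25, 0.25, 1.0]]], device="cuda:0")
    ray_directions = torch.tensor([[[0.0, 0.0, -1.0]]], device="cuda:0")
    ray_hits, ray_dist, _, _ = raycast_mesh(ray_starts, ray_directions, mesh, return_distance=True)
    print(ray_hits)  # hit point on the triangle: (0.25, 0.25, 0.0)
    print(ray_dist)  # distance travelled by the ray: 1.0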
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/utils/noise/noise_model.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import torch
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from . import noise_cfg
def constant_bias_noise(data: torch.Tensor, cfg: noise_cfg.ConstantBiasNoiseCfg) -> torch.Tensor:
"""Add a constant noise."""
return data + cfg.bias
def additive_uniform_noise(data: torch.Tensor, cfg: noise_cfg.UniformNoiseCfg) -> torch.Tensor:
"""Adds a noise sampled from a uniform distribution."""
return data + torch.rand_like(data) * (cfg.n_max - cfg.n_min) + cfg.n_min
def additive_gaussian_noise(data: torch.Tensor, cfg: noise_cfg.GaussianNoiseCfg) -> torch.Tensor:
"""Adds a noise sampled from a gaussian distribution."""
return data + cfg.mean + cfg.std * torch.randn_like(data)
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/utils/noise/noise_cfg.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import torch
from collections.abc import Callable
from dataclasses import MISSING
from omni.isaac.orbit.utils import configclass
from . import noise_model
@configclass
class NoiseCfg:
"""Base configuration for a noise term."""
func: Callable[[torch.Tensor, NoiseCfg], torch.Tensor] = MISSING
"""The function to be called for applying the noise.
Note:
The shape of the input and output tensors must be the same.
"""
@configclass
class AdditiveUniformNoiseCfg(NoiseCfg):
"""Configuration for a additive uniform noise term."""
func = noise_model.additive_uniform_noise
n_min: float = -1.0
"""The minimum value of the noise. Defaults to -1.0."""
n_max: float = 1.0
"""The maximum value of the noise. Defaults to 1.0."""
@configclass
class AdditiveGaussianNoiseCfg(NoiseCfg):
"""Configuration for a additive gaussian noise term."""
func = noise_model.additive_gaussian_noise
mean: float = 0.0
"""The mean of the noise. Defaults to 0.0."""
std: float = 1.0
"""The standard deviation of the noise. Defaults to 1.0."""
@configclass
class ConstantBiasNoiseCfg(NoiseCfg):
"""Configuration for a constant bias noise term."""
func = noise_model.constant_bias_noise
bias: float = 0.0
"""The bias to add. Defaults to 0.0."""
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/utils/noise/__init__.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Sub-module containing different noise models implementations.
The noise models are implemented as functions that take in a tensor and a configuration and return a tensor
with the noise applied. These functions are then used in the :class:`NoiseCfg` configuration class.
Usage:
.. code-block:: python
import torch
from omni.isaac.orbit.utils.noise import AdditiveGaussianNoiseCfg
# create a random tensor
my_tensor = torch.rand(128, 128, device="cuda")
# create a noise configuration
cfg = AdditiveGaussianNoiseCfg(mean=0.0, std=1.0)
# apply the noise
my_noisified_tensor = cfg.func(my_tensor, cfg)
"""
from .noise_cfg import NoiseCfg # noqa: F401
from .noise_cfg import AdditiveGaussianNoiseCfg, AdditiveUniformNoiseCfg, ConstantBiasNoiseCfg
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/assets/asset_base_cfg.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
from dataclasses import MISSING
from typing import Literal
from omni.isaac.orbit.sim import SpawnerCfg
from omni.isaac.orbit.utils import configclass
from .asset_base import AssetBase
@configclass
class AssetBaseCfg:
"""The base configuration class for an asset's parameters.
Please see the :class:`AssetBase` class for more information on the asset class.
"""
@configclass
class InitialStateCfg:
"""Initial state of the asset.
This defines the default initial state of the asset when it is spawned into the simulation, as
well as the default state when the simulation is reset.
After parsing the initial state, the asset class stores this information in the :attr:`data`
attribute of the asset class. This can then be accessed by the user to modify the state of the asset
during the simulation, for example, at resets.
"""
# root position
pos: tuple[float, float, float] = (0.0, 0.0, 0.0)
"""Position of the root in simulation world frame. Defaults to (0.0, 0.0, 0.0)."""
rot: tuple[float, float, float, float] = (1.0, 0.0, 0.0, 0.0)
"""Quaternion rotation (w, x, y, z) of the root in simulation world frame.
Defaults to (1.0, 0.0, 0.0, 0.0).
"""
class_type: type[AssetBase] = MISSING
"""The associated asset class.
The class should inherit from :class:`omni.isaac.orbit.assets.asset_base.AssetBase`.
"""
prim_path: str = MISSING
"""Prim path (or expression) to the asset.
.. note::
The expression can contain the environment namespace regex ``{ENV_REGEX_NS}`` which
will be replaced with the environment namespace.
Example: ``{ENV_REGEX_NS}/Robot`` will be replaced with ``/World/envs/env_.*/Robot``.
"""
spawn: SpawnerCfg | None = None
"""Spawn configuration for the asset. Defaults to None.
If None, then no prims are spawned by the asset class. Instead, it is assumed that the
asset is already present in the scene.
"""
init_state: InitialStateCfg = InitialStateCfg()
"""Initial state of the rigid object. Defaults to identity pose."""
collision_group: Literal[0, -1] = 0
"""Collision group of the asset. Defaults to ``0``.
* ``-1``: global collision group (collides with all assets in the scene).
* ``0``: local collision group (collides with other assets in the same environment).
"""
debug_vis: bool = False
"""Whether to enable debug visualization for the asset. Defaults to ``False``."""
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/assets/__init__.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Sub-package for different assets, such as rigid objects and articulations.
An asset is a physical object that can be spawned in the simulation. The class handles both
the spawning of the asset into the USD stage as well as initialization of necessary physics
handles to interact with the asset.
Upon construction of the asset instance, the prim corresponding to the asset is spawned into the
USD stage if the spawn configuration is not None. The spawn configuration is defined in the
:attr:`AssetBaseCfg.spawn` attribute. In case the configured :attr:`AssetBaseCfg.prim_path` is
an expression, then the prim is spawned at all the matching paths. Otherwise, a single prim is
spawned at the configured path. For more information on the spawn configuration, see the
:mod:`omni.isaac.orbit.sim.spawners` module.
The asset class also registers callbacks for the stage play/stop events. These are used to
construct the physics handles for the asset as the physics engine is only available when the
stage is playing. Additionally, the class registers a callback for debug visualization of the
asset. This can be enabled by setting the :attr:`AssetBaseCfg.debug_vis` attribute to True.
The asset class follows the following naming convention for its methods:
* **set_xxx()**: These are used to only set the buffers into the :attr:`data` instance. However, they
do not write the data into the simulator. The writing of data only happens when the
:meth:`write_data_to_sim` method is called.
* **write_xxx_to_sim()**: These are used to set the buffers into the :attr:`data` instance and write
the corresponding data into the simulator as well.
* **update(dt)**: These are used to update the buffers in the :attr:`data` instance. This should
be called after a simulation step is performed.
The main reason to separate the ``set`` and ``write`` operations is to provide flexibility to the
user when they need to perform a post-processing operation on the buffers before applying them
to the simulator. A common example of this is dealing with explicit actuator models, where the
specified joint targets are not directly applied to the simulator but are instead used to compute
the corresponding actuator torques.
"""
from .articulation import Articulation, ArticulationCfg, ArticulationData
from .asset_base import AssetBase
from .asset_base_cfg import AssetBaseCfg
from .rigid_object import RigidObject, RigidObjectCfg, RigidObjectData
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/assets/asset_base.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import inspect
import re
import weakref
from abc import ABC, abstractmethod
from collections.abc import Sequence
from typing import TYPE_CHECKING, Any
import omni.kit.app
import omni.timeline
import omni.isaac.orbit.sim as sim_utils
if TYPE_CHECKING:
from .asset_base_cfg import AssetBaseCfg
class AssetBase(ABC):
"""The base interface class for assets.
An asset corresponds to any physics-enabled object that can be spawned in the simulation. These include
rigid objects, articulated objects, deformable objects etc. The core functionality of an asset is to
provide a set of buffers that can be used to interact with the simulator. The buffers are updated
by the asset class and can be written into the simulator using their respective ``write`` methods.
This allows a convenient way to perform post-processing operations on the buffers before writing them
into the simulator and obtaining the corresponding simulation results.
The class handles both the spawning of the asset into the USD stage as well as initialization of necessary
physics handles to interact with the asset. Upon construction of the asset instance, the prim corresponding
to the asset is spawned into the USD stage if the spawn configuration is not None. The spawn configuration
is defined in the :attr:`AssetBaseCfg.spawn` attribute. In case the configured :attr:`AssetBaseCfg.prim_path`
is an expression, then the prim is spawned at all the matching paths. Otherwise, a single prim is spawned
at the configured path. For more information on the spawn configuration, see the
:mod:`omni.isaac.orbit.sim.spawners` module.
Unlike the Isaac Sim interface, where one usually needs to call the
:meth:`omni.isaac.core.prims.XFormPrimView.initialize` method to initialize the PhysX handles, the asset
class automatically initializes and invalidates the PhysX handles when the stage is played/stopped. This
is done by registering callbacks for the stage play/stop events.
Additionally, the class registers a callback for debug visualization of the asset if a debug visualization
is implemented in the asset class. This can be enabled by setting the :attr:`AssetBaseCfg.debug_vis` attribute
to True. The debug visualization is implemented through the :meth:`_set_debug_vis_impl` and
:meth:`_debug_vis_callback` methods.
"""
def __init__(self, cfg: AssetBaseCfg):
"""Initialize the asset base.
Args:
cfg: The configuration class for the asset.
Raises:
RuntimeError: If no prims found at input prim path or prim path expression.
"""
# store inputs
self.cfg = cfg
# flag for whether the asset is initialized
self._is_initialized = False
# check if base asset path is valid
# note: currently the spawner does not work if there is a regex pattern in the leaf
# For example, if the prim path is "/World/Robot_[1,2]", the spawner will not
# know which prim to spawn. This is a limitation of the spawner and not the asset.
asset_path = self.cfg.prim_path.split("/")[-1]
asset_path_is_regex = re.match(r"^[a-zA-Z0-9/_]+$", asset_path) is None
# spawn the asset
if self.cfg.spawn is not None and not asset_path_is_regex:
self.cfg.spawn.func(
self.cfg.prim_path,
self.cfg.spawn,
translation=self.cfg.init_state.pos,
orientation=self.cfg.init_state.rot,
)
# check that spawn was successful
matching_prims = sim_utils.find_matching_prims(self.cfg.prim_path)
if len(matching_prims) == 0:
raise RuntimeError(f"Could not find prim with path {self.cfg.prim_path}.")
# note: Use weakref on all callbacks to ensure that this object can be deleted when its destructor is called.
# add callbacks for stage play/stop
# The order is set to 10 which is arbitrary but should be lower priority than the default order of 0
timeline_event_stream = omni.timeline.get_timeline_interface().get_timeline_event_stream()
self._initialize_handle = timeline_event_stream.create_subscription_to_pop_by_type(
int(omni.timeline.TimelineEventType.PLAY),
lambda event, obj=weakref.proxy(self): obj._initialize_callback(event),
order=10,
)
self._invalidate_initialize_handle = timeline_event_stream.create_subscription_to_pop_by_type(
int(omni.timeline.TimelineEventType.STOP),
lambda event, obj=weakref.proxy(self): obj._invalidate_initialize_callback(event),
order=10,
)
# add handle for debug visualization (this is set to a valid handle inside set_debug_vis)
self._debug_vis_handle = None
# set initial state of debug visualization
self.set_debug_vis(self.cfg.debug_vis)
def __del__(self):
"""Unsubscribe from the callbacks."""
# clear physics events handles
if self._initialize_handle:
self._initialize_handle.unsubscribe()
self._initialize_handle = None
if self._invalidate_initialize_handle:
self._invalidate_initialize_handle.unsubscribe()
self._invalidate_initialize_handle = None
# clear debug visualization
if self._debug_vis_handle:
self._debug_vis_handle.unsubscribe()
self._debug_vis_handle = None
"""
Properties
"""
@property
@abstractmethod
def num_instances(self) -> int:
"""Number of instances of the asset.
This is equal to the number of asset instances per environment multiplied by the number of environments.
"""
raise NotImplementedError
@property
def device(self) -> str:
"""Memory device for computation."""
return self._device
@property
@abstractmethod
def data(self) -> Any:
"""Data related to the asset."""
raise NotImplementedError
@property
def has_debug_vis_implementation(self) -> bool:
"""Whether the asset has a debug visualization implemented."""
# check if function raises NotImplementedError
source_code = inspect.getsource(self._set_debug_vis_impl)
return "NotImplementedError" not in source_code
"""
Operations.
"""
def set_debug_vis(self, debug_vis: bool) -> bool:
"""Sets whether to visualize the asset data.
Args:
debug_vis: Whether to visualize the asset data.
Returns:
Whether the debug visualization was successfully set. False if the asset
does not support debug visualization.
"""
# check if debug visualization is supported
if not self.has_debug_vis_implementation:
return False
# toggle debug visualization objects
self._set_debug_vis_impl(debug_vis)
# toggle debug visualization handles
if debug_vis:
# create a subscriber for the post update event if it doesn't exist
if self._debug_vis_handle is None:
app_interface = omni.kit.app.get_app_interface()
self._debug_vis_handle = app_interface.get_post_update_event_stream().create_subscription_to_pop(
lambda event, obj=weakref.proxy(self): obj._debug_vis_callback(event)
)
else:
# remove the subscriber if it exists
if self._debug_vis_handle is not None:
self._debug_vis_handle.unsubscribe()
self._debug_vis_handle = None
# return success
return True
@abstractmethod
def reset(self, env_ids: Sequence[int] | None = None):
"""Resets all internal buffers of selected environments.
Args:
env_ids: The indices of the object to reset. Defaults to None (all instances).
"""
raise NotImplementedError
@abstractmethod
def write_data_to_sim(self):
"""Writes data to the simulator."""
raise NotImplementedError
@abstractmethod
def update(self, dt: float):
"""Update the internal buffers.
The time step ``dt`` is used to compute numerical derivatives of quantities such as joint
accelerations which are not provided by the simulator.
Args:
dt: The amount of time passed from last ``update`` call.
"""
raise NotImplementedError
"""
Implementation specific.
"""
@abstractmethod
def _initialize_impl(self):
"""Initializes the PhysX handles and internal buffers."""
raise NotImplementedError
def _set_debug_vis_impl(self, debug_vis: bool):
"""Set debug visualization into visualization objects.
This function is responsible for creating the visualization objects if they don't exist
and input ``debug_vis`` is True. If the visualization objects exist, the function should
set their visibility into the stage.
"""
raise NotImplementedError(f"Debug visualization is not implemented for {self.__class__.__name__}.")
def _debug_vis_callback(self, event):
"""Callback for debug visualization.
This function calls the visualization objects and sets the data to visualize into them.
"""
raise NotImplementedError(f"Debug visualization is not implemented for {self.__class__.__name__}.")
"""
Internal simulation callbacks.
"""
def _initialize_callback(self, event):
"""Initializes the scene elements.
Note:
PhysX handles are only enabled once the simulator starts playing. Hence, this function needs to be
called whenever the simulator "plays" from a "stop" state.
"""
if not self._is_initialized:
# obtain simulation related information
sim = sim_utils.SimulationContext.instance()
if sim is None:
raise RuntimeError("SimulationContext is not initialized! Please initialize SimulationContext first.")
self._backend = sim.backend
self._device = sim.device
# initialize the asset
self._initialize_impl()
# set flag
self._is_initialized = True
def _invalidate_initialize_callback(self, event):
"""Invalidates the scene elements."""
self._is_initialized = False
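# --- Illustrative sketch (not part of the original file) ---
# A minimal concrete asset only needs to implement the abstract API above.
# The class below is hypothetical and omits any real buffer or view handling.
class _DummyAsset(AssetBase):
    """A do-nothing asset that satisfies the abstract interface."""

    @property
    def num_instances(self) -> int:
        return 1

    @property
    def data(self) -> Any:
        return None

    def reset(self, env_ids: Sequence[int] | None = None):
        pass

    def write_data_to_sim(self):
        pass

    def update(self, dt: float):
        pass

    def _initialize_impl(self):
        # real assets create their physics views and internal buffers here
        pass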
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/assets/rigid_object/__init__.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Sub-module for rigid object assets."""
from .rigid_object import RigidObject
from .rigid_object_cfg import RigidObjectCfg
from .rigid_object_data import RigidObjectData
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/assets/rigid_object/rigid_object_cfg.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
from omni.isaac.orbit.utils import configclass
from ..asset_base_cfg import AssetBaseCfg
from .rigid_object import RigidObject
@configclass
class RigidObjectCfg(AssetBaseCfg):
"""Configuration parameters for a rigid object."""
@configclass
class InitialStateCfg(AssetBaseCfg.InitialStateCfg):
"""Initial state of the rigid body."""
lin_vel: tuple[float, float, float] = (0.0, 0.0, 0.0)
"""Linear velocity of the root in simulation world frame. Defaults to (0.0, 0.0, 0.0)."""
ang_vel: tuple[float, float, float] = (0.0, 0.0, 0.0)
"""Angular velocity of the root in simulation world frame. Defaults to (0.0, 0.0, 0.0)."""
##
# Initialize configurations.
##
class_type: type = RigidObject
init_state: InitialStateCfg = InitialStateCfg()
"""Initial state of the rigid object. Defaults to identity pose with zero velocity."""
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/assets/rigid_object/rigid_object_data.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import torch
from dataclasses import dataclass
@dataclass
class RigidObjectData:
"""Data container for a rigid object."""
##
# Properties.
##
body_names: list[str] = None
"""Body names in the order parsed by the simulation view."""
##
# Default states.
##
default_root_state: torch.Tensor = None
"""Default root state ``[pos, quat, lin_vel, ang_vel]`` in local environment frame. Shape is (num_instances, 13)."""
##
# Frame states.
##
root_state_w: torch.Tensor = None
"""Root state ``[pos, quat, lin_vel, ang_vel]`` in simulation world frame. Shape is (num_instances, 13)."""
root_vel_b: torch.Tensor = None
"""Root velocity `[lin_vel, ang_vel]` in base frame. Shape is (num_instances, 6)."""
projected_gravity_b: torch.Tensor = None
"""Projection of the gravity direction on base frame. Shape is (num_instances, 3)."""
heading_w: torch.Tensor = None
"""Yaw heading of the base frame (in radians). Shape is (num_instances,).
Note:
This quantity is computed by assuming that the forward-direction of the base
frame is along the x-direction, i.e. :math:`(1, 0, 0)`.
"""
body_state_w: torch.Tensor = None
"""State of all bodies `[pos, quat, lin_vel, ang_vel]` in simulation world frame.
Shape is (num_instances, num_bodies, 13)."""
body_acc_w: torch.Tensor = None
"""Acceleration of all bodies. Shape is (num_instances, num_bodies, 6).
Note:
This quantity is computed based on the rigid body state from the last step.
"""
"""
Properties
"""
@property
def root_pos_w(self) -> torch.Tensor:
"""Root position in simulation world frame. Shape is (num_instances, 3)."""
return self.root_state_w[:, :3]
@property
def root_quat_w(self) -> torch.Tensor:
"""Root orientation (w, x, y, z) in simulation world frame. Shape is (num_instances, 4)."""
return self.root_state_w[:, 3:7]
@property
def root_vel_w(self) -> torch.Tensor:
"""Root velocity in simulation world frame. Shape is (num_instances, 6)."""
return self.root_state_w[:, 7:13]
@property
def root_lin_vel_w(self) -> torch.Tensor:
"""Root linear velocity in simulation world frame. Shape is (num_instances, 3)."""
return self.root_state_w[:, 7:10]
@property
def root_ang_vel_w(self) -> torch.Tensor:
"""Root angular velocity in simulation world frame. Shape is (num_instances, 3)."""
return self.root_state_w[:, 10:13]
@property
def root_lin_vel_b(self) -> torch.Tensor:
"""Root linear velocity in base frame. Shape is (num_instances, 3)."""
return self.root_vel_b[:, 0:3]
@property
def root_ang_vel_b(self) -> torch.Tensor:
"""Root angular velocity in base world frame. Shape is (num_instances, 3)."""
return self.root_vel_b[:, 3:6]
@property
def body_pos_w(self) -> torch.Tensor:
"""Positions of all bodies in simulation world frame. Shape is (num_instances, num_bodies, 3)."""
return self.body_state_w[..., :3]
@property
def body_quat_w(self) -> torch.Tensor:
"""Orientation (w, x, y, z) of all bodies in simulation world frame. Shape is (num_instances, num_bodies, 4)."""
return self.body_state_w[..., 3:7]
@property
def body_vel_w(self) -> torch.Tensor:
"""Velocity of all bodies in simulation world frame. Shape is (num_instances, num_bodies, 6)."""
return self.body_state_w[..., 7:13]
@property
def body_lin_vel_w(self) -> torch.Tensor:
"""Linear velocity of all bodies in simulation world frame. Shape is (num_instances, num_bodies, 3)."""
return self.body_state_w[..., 7:10]
@property
def body_ang_vel_w(self) -> torch.Tensor:
"""Angular velocity of all bodies in simulation world frame. Shape is (num_instances, num_bodies, 3)."""
return self.body_state_w[..., 10:13]
@property
def body_lin_acc_w(self) -> torch.Tensor:
"""Linear acceleration of all bodies in simulation world frame. Shape is (num_instances, num_bodies, 3)."""
return self.body_acc_w[..., 0:3]
@property
def body_ang_acc_w(self) -> torch.Tensor:
"""Angular acceleration of all bodies in simulation world frame. Shape is (num_instances, num_bodies, 3)."""
return self.body_acc_w[..., 3:6]
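# --- Illustrative note (not part of the original file) ---
# Layout of the 13-dim root/body state vectors sliced by the properties above:
#   [0:3]   position (x, y, z)
#   [3:7]   orientation quaternion (w, x, y, z)
#   [7:10]  linear velocity
#   [10:13] angular velocity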
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/assets/rigid_object/rigid_object.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import torch
import warnings
from collections.abc import Sequence
from typing import TYPE_CHECKING
import carb
import omni.physics.tensors.impl.api as physx
from pxr import UsdPhysics
import omni.isaac.orbit.sim as sim_utils
import omni.isaac.orbit.utils.math as math_utils
import omni.isaac.orbit.utils.string as string_utils
from ..asset_base import AssetBase
from .rigid_object_data import RigidObjectData
if TYPE_CHECKING:
from .rigid_object_cfg import RigidObjectCfg
class RigidObject(AssetBase):
"""A rigid object asset class.
Rigid objects are assets comprising rigid bodies. They can be used to represent dynamic objects
such as boxes, spheres, etc. A rigid body is described by its pose, velocity and mass distribution.
For an asset to be considered a rigid object, the root prim of the asset must have the `USD RigidBodyAPI`_
applied to it. This API is used to define the simulation properties of the rigid body. On playing the
simulation, the physics engine will automatically register the rigid body and create a corresponding
rigid body handle. This handle can be accessed using the :attr:`root_physx_view` attribute.
.. note::
For users familiar with Isaac Sim, the PhysX view class API is not exactly the same as the Isaac Sim view
class API. Similar to Orbit, Isaac Sim wraps around the PhysX view API. However, as of now (2023.1 release),
we see a large difference in initializing the view classes in Isaac Sim. This is because the view classes
in Isaac Sim perform additional USD-related operations which are slow and also not required.
.. _`USD RigidBodyAPI`: https://openusd.org/dev/api/class_usd_physics_rigid_body_a_p_i.html
"""
cfg: RigidObjectCfg
"""Configuration instance for the rigid object."""
def __init__(self, cfg: RigidObjectCfg):
"""Initialize the rigid object.
Args:
cfg: A configuration instance.
"""
super().__init__(cfg)
# container for data access
self._data = RigidObjectData()
"""
Properties
"""
@property
def data(self) -> RigidObjectData:
return self._data
@property
def num_instances(self) -> int:
return self.root_physx_view.count
@property
def num_bodies(self) -> int:
"""Number of bodies in the asset."""
return 1
@property
def body_names(self) -> list[str]:
"""Ordered names of bodies in articulation."""
prim_paths = self.root_physx_view.prim_paths[: self.num_bodies]
return [path.split("/")[-1] for path in prim_paths]
@property
def root_physx_view(self) -> physx.RigidBodyView:
"""Rigid body view for the asset (PhysX).
Note:
Use this view with caution. It requires handling of tensors in a specific way.
"""
return self._root_physx_view
@property
def body_physx_view(self) -> physx.RigidBodyView:
"""Rigid body view for the asset (PhysX).
.. deprecated:: v0.3.0
The attribute 'body_physx_view' will be removed in v0.4.0. Please use :attr:`root_physx_view` instead.
"""
dep_msg = "The attribute 'body_physx_view' will be removed in v0.4.0. Please use 'root_physx_view' instead."
warnings.warn(dep_msg, DeprecationWarning)
carb.log_error(dep_msg)
return self.root_physx_view
"""
Operations.
"""
def reset(self, env_ids: Sequence[int] | None = None):
# resolve all indices
if env_ids is None:
env_ids = slice(None)
# reset external wrench
self._external_force_b[env_ids] = 0.0
self._external_torque_b[env_ids] = 0.0
# reset last body vel
self._last_body_vel_w[env_ids] = 0.0
def write_data_to_sim(self):
"""Write external wrench to the simulation.
Note:
We write external wrench to the simulation here since this function is called before the simulation step.
This ensures that the external wrench is applied at every simulation step.
"""
# write external wrench
if self.has_external_wrench:
self.root_physx_view.apply_forces_and_torques_at_position(
force_data=self._external_force_b.view(-1, 3),
torque_data=self._external_torque_b.view(-1, 3),
position_data=None,
indices=self._ALL_BODY_INDICES,
is_global=False,
)
def update(self, dt: float):
# -- root-state (note: we roll the quaternion to match the convention used in Isaac Sim -- wxyz)
self._data.root_state_w[:, :7] = self.root_physx_view.get_transforms()
self._data.root_state_w[:, 3:7] = math_utils.convert_quat(self._data.root_state_w[:, 3:7], to="wxyz")
self._data.root_state_w[:, 7:] = self.root_physx_view.get_velocities()
# -- body-state (note: for rigid objects, we only have one body so we just copy the root state)
self._data.body_state_w[:] = self._data.root_state_w.view(-1, self.num_bodies, 13)
# -- update common data
self._update_common_data(dt)
def find_bodies(self, name_keys: str | Sequence[str], preserve_order: bool = False) -> tuple[list[int], list[str]]:
"""Find bodies in the articulation based on the name keys.
Please check the :meth:`omni.isaac.orbit.utils.string_utils.resolve_matching_names` function for more
information on the name matching.
Args:
name_keys: A regular expression or a list of regular expressions to match the body names.
preserve_order: Whether to preserve the order of the name keys in the output. Defaults to False.
Returns:
A tuple of lists containing the body indices and names.
"""
return string_utils.resolve_matching_names(name_keys, self.body_names, preserve_order)
"""
Operations - Write to simulation.
"""
def write_root_state_to_sim(self, root_state: torch.Tensor, env_ids: Sequence[int] | None = None):
"""Set the root state over selected environment indices into the simulation.
The root state comprises the Cartesian position, the quaternion orientation in (w, x, y, z), and the linear
and angular velocity. All quantities are expressed in the simulation frame.
Args:
root_state: Root state in simulation frame. Shape is (len(env_ids), 13).
env_ids: Environment indices. If None, then all indices are used.
"""
# set into simulation
self.write_root_pose_to_sim(root_state[:, :7], env_ids=env_ids)
self.write_root_velocity_to_sim(root_state[:, 7:], env_ids=env_ids)
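    # --- Usage sketch (illustrative, not part of the original file) ---
    # A common reset pattern: copy the default state, offset the position by
    # per-environment origins (a hypothetical `env_origins` tensor), write back:
    #   root_state = asset.data.default_root_state[env_ids].clone()
    #   root_state[:, :3] += env_origins[env_ids]
    #   asset.write_root_state_to_sim(root_state, env_ids=env_ids)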
def write_root_pose_to_sim(self, root_pose: torch.Tensor, env_ids: Sequence[int] | None = None):
"""Set the root pose over selected environment indices into the simulation.
The root pose comprises the Cartesian position and the quaternion orientation in (w, x, y, z).
Args:
root_pose: Root poses in simulation frame. Shape is (len(env_ids), 7).
env_ids: Environment indices. If None, then all indices are used.
"""
# resolve all indices
physx_env_ids = env_ids
if env_ids is None:
env_ids = slice(None)
physx_env_ids = self._ALL_INDICES
# note: we need to do this here since tensors are not set into simulation until step.
# set into internal buffers
self._data.root_state_w[env_ids, :7] = root_pose.clone()
# convert root quaternion from wxyz to xyzw
root_poses_xyzw = self._data.root_state_w[:, :7].clone()
root_poses_xyzw[:, 3:] = math_utils.convert_quat(root_poses_xyzw[:, 3:], to="xyzw")
# set into simulation
self.root_physx_view.set_transforms(root_poses_xyzw, indices=physx_env_ids)
def write_root_velocity_to_sim(self, root_velocity: torch.Tensor, env_ids: Sequence[int] | None = None):
"""Set the root velocity over selected environment indices into the simulation.
Args:
root_velocity: Root velocities in simulation frame. Shape is (len(env_ids), 6).
env_ids: Environment indices. If None, then all indices are used.
"""
# resolve all indices
physx_env_ids = env_ids
if env_ids is None:
env_ids = slice(None)
physx_env_ids = self._ALL_INDICES
# note: we need to do this here since tensors are not set into simulation until step.
# set into internal buffers
self._data.root_state_w[env_ids, 7:] = root_velocity.clone()
# set into simulation
self.root_physx_view.set_velocities(self._data.root_state_w[:, 7:], indices=physx_env_ids)
"""
Operations - Setters.
"""
def set_external_force_and_torque(
self,
forces: torch.Tensor,
torques: torch.Tensor,
body_ids: Sequence[int] | slice | None = None,
env_ids: Sequence[int] | None = None,
):
"""Set external force and torque to apply on the asset's bodies in their local frame.
For many applications, we want to keep the applied external force on rigid bodies constant over a period of
time (for instance, during the policy control). This function allows us to store the external force and torque
into buffers which are then applied to the simulation at every step.
.. caution::
If the function is called with empty forces and torques, then this function disables the application
of external wrench to the simulation.
.. code-block:: python
# example of disabling external wrench
asset.set_external_force_and_torque(forces=torch.zeros(0, 3), torques=torch.zeros(0, 3))
.. note::
This function does not apply the external wrench to the simulation. It only fills the buffers with
the desired values. To apply the external wrench, call the :meth:`write_data_to_sim` function
right before the simulation step.
Args:
forces: External forces in bodies' local frame. Shape is (len(env_ids), len(body_ids), 3).
torques: External torques in bodies' local frame. Shape is (len(env_ids), len(body_ids), 3).
body_ids: Body indices to apply external wrench to. Defaults to None (all bodies).
env_ids: Environment indices to apply external wrench to. Defaults to None (all instances).
"""
if forces.any() or torques.any():
self.has_external_wrench = True
# resolve all indices
# -- env_ids
if env_ids is None:
env_ids = self._ALL_INDICES
elif not isinstance(env_ids, torch.Tensor):
env_ids = torch.tensor(env_ids, dtype=torch.long, device=self.device)
# -- body_ids
if body_ids is None:
body_ids = torch.arange(self.num_bodies, dtype=torch.long, device=self.device)
elif isinstance(body_ids, slice):
body_ids = torch.arange(self.num_bodies, dtype=torch.long, device=self.device)[body_ids]
elif not isinstance(body_ids, torch.Tensor):
body_ids = torch.tensor(body_ids, dtype=torch.long, device=self.device)
# note: we need to do this complicated indexing since torch doesn't support multi-indexing
# create global body indices from env_ids and env_body_ids
# (env_id * total_bodies_per_env) + body_id
indices = body_ids.repeat(len(env_ids), 1) + env_ids.unsqueeze(1) * self.num_bodies
indices = indices.view(-1)
# set into internal buffers
# note: these are applied in the write_to_sim function
self._external_force_b.flatten(0, 1)[indices] = forces.flatten(0, 1)
self._external_torque_b.flatten(0, 1)[indices] = torques.flatten(0, 1)
else:
self.has_external_wrench = False
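    # --- Usage sketch (illustrative, not part of the original file) ---
    # Apply a constant force along each body's local z-axis and push it to the
    # simulator before every step:
    #   forces = torch.zeros(asset.num_instances, asset.num_bodies, 3, device=asset.device)
    #   forces[..., 2] = 10.0
    #   asset.set_external_force_and_torque(forces, torch.zeros_like(forces))
    #   asset.write_data_to_sim()  # call before each simulation step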
"""
Internal helper.
"""
def _initialize_impl(self):
# create simulation view
self._physics_sim_view = physx.create_simulation_view(self._backend)
self._physics_sim_view.set_subspace_roots("/")
# obtain the first prim in the regex expression (all others are assumed to be a copy of this)
template_prim = sim_utils.find_first_matching_prim(self.cfg.prim_path)
if template_prim is None:
raise RuntimeError(f"Failed to find prim for expression: '{self.cfg.prim_path}'.")
template_prim_path = template_prim.GetPath().pathString
# find rigid root prims
root_prims = sim_utils.get_all_matching_child_prims(
template_prim_path, predicate=lambda prim: prim.HasAPI(UsdPhysics.RigidBodyAPI)
)
if len(root_prims) != 1:
raise RuntimeError(
f"Failed to find a single rigid body when resolving '{self.cfg.prim_path}'."
f" Found multiple '{root_prims}' under '{template_prim_path}'."
)
# resolve root prim back into regex expression
root_prim_path = root_prims[0].GetPath().pathString
root_prim_path_expr = self.cfg.prim_path + root_prim_path[len(template_prim_path) :]
# -- object view
self._root_physx_view = self._physics_sim_view.create_rigid_body_view(root_prim_path_expr.replace(".*", "*"))
# log information about the articulation
carb.log_info(f"Rigid body initialized at: {self.cfg.prim_path} with root '{root_prim_path_expr}'.")
carb.log_info(f"Number of instances: {self.num_instances}")
carb.log_info(f"Number of bodies: {self.num_bodies}")
carb.log_info(f"Body names: {self.body_names}")
# create buffers
self._create_buffers()
# process configuration
self._process_cfg()
def _create_buffers(self):
"""Create buffers for storing data."""
# constants
self._ALL_INDICES = torch.arange(self.num_instances, dtype=torch.long, device=self.device)
self._ALL_BODY_INDICES = torch.arange(
self.root_physx_view.count * self.num_bodies, dtype=torch.long, device=self.device
)
self.GRAVITY_VEC_W = torch.tensor((0.0, 0.0, -1.0), device=self.device).repeat(self.num_instances, 1)
self.FORWARD_VEC_B = torch.tensor((1.0, 0.0, 0.0), device=self.device).repeat(self.num_instances, 1)
# external forces and torques
self.has_external_wrench = False
self._external_force_b = torch.zeros((self.num_instances, self.num_bodies, 3), device=self.device)
self._external_torque_b = torch.zeros_like(self._external_force_b)
# asset data
# -- properties
self._data.body_names = self.body_names
# -- root states
self._data.root_state_w = torch.zeros(self.num_instances, 13, device=self.device)
self._data.root_state_w[:, 3] = 1.0 # set default quaternion to (1, 0, 0, 0)
self._data.default_root_state = torch.zeros_like(self._data.root_state_w)
self._data.default_root_state[:, 3] = 1.0 # set default quaternion to (1, 0, 0, 0)
# -- body states
self._data.body_state_w = torch.zeros(self.num_instances, self.num_bodies, 13, device=self.device)
self._data.body_state_w[:, :, 3] = 1.0 # set default quaternion to (1, 0, 0, 0)
# -- post-computed
self._data.root_vel_b = torch.zeros(self.num_instances, 6, device=self.device)
self._data.projected_gravity_b = torch.zeros(self.num_instances, 3, device=self.device)
self._data.heading_w = torch.zeros(self.num_instances, device=self.device)
self._data.body_acc_w = torch.zeros(self.num_instances, self.num_bodies, 6, device=self.device)
# history buffers for quantities
# -- used to compute body accelerations numerically
self._last_body_vel_w = torch.zeros(self.num_instances, self.num_bodies, 6, device=self.device)
def _process_cfg(self):
"""Post processing of configuration parameters."""
# default state
# -- root state
# note: we cast to tuple to avoid torch/numpy type mismatch.
default_root_state = (
tuple(self.cfg.init_state.pos)
+ tuple(self.cfg.init_state.rot)
+ tuple(self.cfg.init_state.lin_vel)
+ tuple(self.cfg.init_state.ang_vel)
)
default_root_state = torch.tensor(default_root_state, dtype=torch.float, device=self.device)
self._data.default_root_state = default_root_state.repeat(self.num_instances, 1)
def _update_common_data(self, dt: float):
"""Update common quantities related to rigid objects.
Note:
This has been separated from the update function to allow for the child classes to
override the update function without having to worry about updating the common data.
"""
# -- body acceleration
self._data.body_acc_w[:] = (self._data.body_state_w[..., 7:] - self._last_body_vel_w) / dt
self._last_body_vel_w[:] = self._data.body_state_w[..., 7:]
# -- root state in body frame
self._data.root_vel_b[:, 0:3] = math_utils.quat_rotate_inverse(
self._data.root_quat_w, self._data.root_lin_vel_w
)
self._data.root_vel_b[:, 3:6] = math_utils.quat_rotate_inverse(
self._data.root_quat_w, self._data.root_ang_vel_w
)
self._data.projected_gravity_b[:] = math_utils.quat_rotate_inverse(self._data.root_quat_w, self.GRAVITY_VEC_W)
# -- heading direction of root
forward_w = math_utils.quat_apply(self._data.root_quat_w, self.FORWARD_VEC_B)
self._data.heading_w[:] = torch.atan2(forward_w[:, 1], forward_w[:, 0])
"""
Internal simulation callbacks.
"""
def _invalidate_initialize_callback(self, event):
"""Invalidates the scene elements."""
# call parent
super()._invalidate_initialize_callback(event)
# set all existing views to None to invalidate them
self._physics_sim_view = None
self._root_physx_view = None
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/assets/config/cassie.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Configuration for Agility robots.
The following configurations are available:
* :obj:`CASSIE_CFG`: Agility Cassie robot with simple PD controller for the legs
Reference: https://github.com/UMich-BipedLab/Cassie_Model/blob/master/urdf/cassie.urdf
"""
from __future__ import annotations
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.actuators import ImplicitActuatorCfg
from omni.isaac.orbit.utils.assets import ISAAC_ORBIT_NUCLEUS_DIR
from ..articulation import ArticulationCfg
##
# Configuration
##
CASSIE_CFG = ArticulationCfg(
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_ORBIT_NUCLEUS_DIR}/Robots/Agility/Cassie/cassie.usd",
activate_contact_sensors=True,
rigid_props=sim_utils.RigidBodyPropertiesCfg(
disable_gravity=False,
retain_accelerations=False,
linear_damping=0.0,
angular_damping=0.0,
max_linear_velocity=1000.0,
max_angular_velocity=1000.0,
max_depenetration_velocity=1.0,
),
articulation_props=sim_utils.ArticulationRootPropertiesCfg(
enabled_self_collisions=True, solver_position_iteration_count=4, solver_velocity_iteration_count=0
),
),
init_state=ArticulationCfg.InitialStateCfg(
pos=(0.0, 0.0, 0.9),
joint_pos={
"hip_abduction_left": 0.1,
"hip_rotation_left": 0.0,
"hip_flexion_left": 1.0,
"thigh_joint_left": -1.8,
"ankle_joint_left": 1.57,
"toe_joint_left": -1.57,
"hip_abduction_right": -0.1,
"hip_rotation_right": 0.0,
"hip_flexion_right": 1.0,
"thigh_joint_right": -1.8,
"ankle_joint_right": 1.57,
"toe_joint_right": -1.57,
},
joint_vel={".*": 0.0},
),
soft_joint_pos_limit_factor=0.9,
actuators={
"legs": ImplicitActuatorCfg(
joint_names_expr=["hip_.*", "thigh_.*", "ankle_.*"],
effort_limit=200.0,
velocity_limit=10.0,
stiffness={
"hip_abduction.*": 100.0,
"hip_rotation.*": 100.0,
"hip_flexion.*": 200.0,
"thigh_joint.*": 200.0,
"ankle_joint.*": 200.0,
},
damping={
"hip_abduction.*": 3.0,
"hip_rotation.*": 3.0,
"hip_flexion.*": 6.0,
"thigh_joint.*": 6.0,
"ankle_joint.*": 6.0,
},
),
"toes": ImplicitActuatorCfg(
joint_names_expr=["toe_.*"],
effort_limit=20.0,
velocity_limit=10.0,
stiffness={
"toe_joint.*": 20.0,
},
damping={
"toe_joint.*": 1.0,
},
),
},
)
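# --- Usage sketch (illustrative, not part of the original file) ---
# Instantiating the robot from this configuration, assuming the standard
# environment namespace used elsewhere in Orbit:
#
#   from omni.isaac.orbit.assets import Articulation
#   robot = Articulation(CASSIE_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot"))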
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/assets/articulation/articulation_data.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
import torch
from dataclasses import dataclass
from ..rigid_object import RigidObjectData
@dataclass
class ArticulationData(RigidObjectData):
"""Data container for an articulation."""
##
# Properties.
##
joint_names: list[str] = None
"""Joint names in the order parsed by the simulation view."""
##
# Default states.
##
default_joint_pos: torch.Tensor = None
"""Default joint positions of all joints. Shape is (num_instances, num_joints)."""
default_joint_vel: torch.Tensor = None
"""Default joint velocities of all joints. Shape is (num_instances, num_joints)."""
##
# Joint states <- From simulation.
##
joint_pos: torch.Tensor = None
"""Joint positions of all joints. Shape is (num_instances, num_joints)."""
joint_vel: torch.Tensor = None
"""Joint velocities of all joints. Shape is (num_instances, num_joints)."""
joint_acc: torch.Tensor = None
"""Joint acceleration of all joints. Shape is (num_instances, num_joints)."""
##
# Joint commands -- Set into simulation.
##
joint_pos_target: torch.Tensor = None
"""Joint position targets commanded by the user. Shape is (num_instances, num_joints).
For an implicit actuator model, the targets are directly set into the simulation.
For an explicit actuator model, the targets are used to compute the joint torques (see :attr:`applied_torque`),
which are then set into the simulation.
"""
joint_vel_target: torch.Tensor = None
"""Joint velocity targets commanded by the user. Shape is (num_instances, num_joints).
For an implicit actuator model, the targets are directly set into the simulation.
For an explicit actuator model, the targets are used to compute the joint torques (see :attr:`applied_torque`),
which are then set into the simulation.
"""
joint_effort_target: torch.Tensor = None
"""Joint effort targets commanded by the user. Shape is (num_instances, num_joints).
For an implicit actuator model, the targets are directly set into the simulation.
For an explicit actuator model, the targets are used to compute the joint torques (see :attr:`applied_torque`),
which are then set into the simulation.
"""
joint_stiffness: torch.Tensor = None
"""Joint stiffness provided to simulation. Shape is (num_instances, num_joints)."""
joint_damping: torch.Tensor = None
"""Joint damping provided to simulation. Shape is (num_instances, num_joints)."""
joint_armature: torch.Tensor = None
"""Joint armature provided to simulation. Shape is (num_instances, num_joints)."""
joint_friction: torch.Tensor = None
"""Joint friction provided to simulation. Shape is (num_instances, num_joints)."""
##
# Joint commands -- Explicit actuators.
##
computed_torque: torch.Tensor = None
"""Joint torques computed from the actuator model (before clipping). Shape is (num_instances, num_joints).
This quantity is the raw torque output from the actuator model, before any clipping is applied.
It is exposed for users who want to inspect the computations inside the actuator model.
For instance, to penalize the learning agent for a difference between the computed and applied torques.
Note: The torques are zero for implicit actuator models.
"""
applied_torque: torch.Tensor = None
"""Joint torques applied from the actuator model (after clipping). Shape is (num_instances, num_joints).
These torques are set into the simulation, after clipping the :attr:`computed_torque` based on the
actuator model.
Note: The torques are zero for implicit actuator models.
"""
##
# Other Data.
##
soft_joint_pos_limits: torch.Tensor = None
"""Joint positions limits for all joints. Shape is (num_instances, num_joints, 2)."""
soft_joint_vel_limits: torch.Tensor = None
"""Joint velocity limits for all joints. Shape is (num_instances, num_joints)."""
gear_ratio: torch.Tensor = None
"""Gear ratio for relating motor torques to applied Joint torques. Shape is (num_instances, num_joints)."""
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/assets/articulation/__init__.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Sub-module for rigid articulated assets."""
from .articulation import Articulation
from .articulation_cfg import ArticulationCfg
from .articulation_data import ArticulationData
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/assets/articulation/articulation.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
# Flag for pyright to ignore type errors in this file.
# pyright: reportPrivateUsage=false
from __future__ import annotations
import torch
import warnings
from collections.abc import Sequence
from prettytable import PrettyTable
from typing import TYPE_CHECKING
import carb
import omni.physics.tensors.impl.api as physx
from omni.isaac.core.utils.types import ArticulationActions
from pxr import UsdPhysics
import omni.isaac.orbit.sim as sim_utils
import omni.isaac.orbit.utils.math as math_utils
import omni.isaac.orbit.utils.string as string_utils
from omni.isaac.orbit.actuators import ActuatorBase, ActuatorBaseCfg, ImplicitActuator
from ..rigid_object import RigidObject
from .articulation_data import ArticulationData
if TYPE_CHECKING:
from .articulation_cfg import ArticulationCfg
class Articulation(RigidObject):
"""An articulation asset class.
An articulation is a collection of rigid bodies connected by joints. The joints can be either
fixed or actuated. The joints can be of different types, such as revolute, prismatic, D-6, etc.
However, the articulation class has so far only been tested with revolute and prismatic joints.
The class supports both floating-base and fixed-base articulations. The type of articulation
is determined based on the root joint of the articulation. If the root joint is fixed, then
the articulation is considered a fixed-base system. Otherwise, it is considered a floating-base
system. This can be checked using the :attr:`Articulation.is_fixed_base` attribute.
For an asset to be considered an articulation, the root prim of the asset must have the
`USD ArticulationRootAPI`_. This API is used to define the sub-tree of the articulation using
the reduced coordinate formulation. On playing the simulation, the physics engine parses the
articulation root prim and creates the corresponding articulation in the physics engine. The
articulation root prim can be specified using the :attr:`AssetBaseCfg.prim_path` attribute.
The articulation class is a subclass of the :class:`RigidObject` class. Therefore, it inherits
all the functionality of the rigid object class. In case of an articulation, the :attr:`root_physx_view`
attribute corresponds to the articulation root view and can be used to access the articulation
related data.
The articulation class also provides the functionality to augment the simulation of an articulated
system with custom actuator models. These models can either be explicit or implicit, as detailed in
the :mod:`omni.isaac.orbit.actuators` module. The actuator models are specified using the
:attr:`ArticulationCfg.actuators` attribute. These are then parsed and used to initialize the
corresponding actuator models, when the simulation is played.
During the simulation step, the articulation class first applies the actuator models to compute
the joint commands based on the user-specified targets. These joint commands are then applied
into the simulation. The joint commands can be either position, velocity, or effort commands.
As an example, the following snippet shows how this can be used for position commands:
.. code-block:: python
# an example instance of the articulation class
my_articulation = Articulation(cfg)
# set joint position targets
my_articulation.set_joint_position_target(position)
# propagate the actuator models and apply the computed commands into the simulation
my_articulation.write_data_to_sim()
# step the simulation using the simulation context
sim_context.step()
# update the articulation state, where dt is the simulation time step
my_articulation.update(dt)
.. _`USD ArticulationRootAPI`: https://openusd.org/dev/api/class_usd_physics_articulation_root_a_p_i.html
"""
cfg: ArticulationCfg
"""Configuration instance for the articulations."""
def __init__(self, cfg: ArticulationCfg):
"""Initialize the articulation.
Args:
cfg: A configuration instance.
"""
super().__init__(cfg)
# container for data access
self._data = ArticulationData()
# data for storing actuator group
self.actuators: dict[str, ActuatorBase] = dict.fromkeys(self.cfg.actuators.keys())
"""
Properties
"""
@property
def data(self) -> ArticulationData:
return self._data
@property
def is_fixed_base(self) -> bool:
"""Whether the articulation is a fixed-base or floating-base system."""
return self.root_physx_view.shared_metatype.fixed_base
@property
def num_joints(self) -> int:
"""Number of joints in articulation."""
return self.root_physx_view.shared_metatype.dof_count
@property
def num_fixed_tendons(self) -> int:
"""Number of fixed tendons in articulation."""
return self.root_physx_view.max_fixed_tendons
@property
def num_bodies(self) -> int:
"""Number of bodies in articulation."""
return self.root_physx_view.shared_metatype.link_count
@property
def joint_names(self) -> list[str]:
"""Ordered names of joints in articulation."""
return self.root_physx_view.shared_metatype.dof_names
@property
def body_names(self) -> list[str]:
"""Ordered names of bodies in articulation."""
return self.root_physx_view.shared_metatype.link_names
@property
def root_physx_view(self) -> physx.ArticulationView:
"""Articulation view for the asset (PhysX).
Note:
Use this view with caution. It requires handling of tensors in a specific way.
"""
return self._root_physx_view
@property
def body_physx_view(self) -> physx.RigidBodyView:
"""Rigid body view for the asset (PhysX).
.. deprecated:: v0.3.0
In previous versions, this attribute returned the rigid body view over all the links of the articulation.
However, this led to confusion with the link ordering as they were not ordered in the same way as the
articulation view.
Therefore, this attribute will be removed in v0.4.0. Please use the :attr:`root_physx_view` attribute
instead.
"""
dep_msg = "The attribute 'body_physx_view' will be removed in v0.4.0. Please use 'root_physx_view' instead."
warnings.warn(dep_msg, DeprecationWarning)
carb.log_error(dep_msg)
return self._body_physx_view
"""
Operations.
"""
def reset(self, env_ids: Sequence[int] | None = None):
super().reset(env_ids)
        # use a slice object to select all environments when none are provided.
if env_ids is None:
env_ids = slice(None)
# reset actuators
for actuator in self.actuators.values():
actuator.reset(env_ids)
def write_data_to_sim(self):
"""Write external wrenches and joint commands to the simulation.
If any explicit actuators are present, then the actuator models are used to compute the
joint commands. Otherwise, the joint commands are directly set into the simulation.
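        As a minimal sketch, an external wrench can be buffered and written together with the
        joint commands (the ``my_articulation`` instance and the force value are assumptions):
        .. code-block:: python
            import torch
            # forces and torques are expected with shape (num_envs, num_bodies, 3)
            forces = torch.zeros(
                my_articulation.num_instances, my_articulation.num_bodies, 3, device=my_articulation.device
            )
            forces[..., 2] = 10.0  # push all bodies upwards with 10 N
            my_articulation.set_external_force_and_torque(forces, torch.zeros_like(forces))
            # propagate the buffered wrenches and joint commands into the simulation
            my_articulation.write_data_to_sim()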
"""
# write external wrench
if self.has_external_wrench:
# apply external forces and torques
self._body_physx_view.apply_forces_and_torques_at_position(
force_data=self._external_force_body_view_b.view(-1, 3),
torque_data=self._external_torque_body_view_b.view(-1, 3),
position_data=None,
indices=self._ALL_BODY_INDICES,
is_global=False,
)
# apply actuator models
self._apply_actuator_model()
# write actions into simulation
self.root_physx_view.set_dof_actuation_forces(self._joint_effort_target_sim, self._ALL_INDICES)
# position and velocity targets only for implicit actuators
if self._has_implicit_actuators:
self.root_physx_view.set_dof_position_targets(self._joint_pos_target_sim, self._ALL_INDICES)
self.root_physx_view.set_dof_velocity_targets(self._joint_vel_target_sim, self._ALL_INDICES)
def update(self, dt: float):
# -- root state (note: we roll the quaternion to match the convention used in Isaac Sim -- wxyz)
self._data.root_state_w[:, :7] = self.root_physx_view.get_root_transforms()
self._data.root_state_w[:, 3:7] = math_utils.convert_quat(self._data.root_state_w[:, 3:7], to="wxyz")
self._data.root_state_w[:, 7:] = self.root_physx_view.get_root_velocities()
# -- body-state (note: we roll the quaternion to match the convention used in Isaac Sim -- wxyz)
self._data.body_state_w[..., :7] = self.root_physx_view.get_link_transforms()
self._data.body_state_w[..., 3:7] = math_utils.convert_quat(self._data.body_state_w[..., 3:7], to="wxyz")
self._data.body_state_w[..., 7:] = self.root_physx_view.get_link_velocities()
# -- joint states
self._data.joint_pos[:] = self.root_physx_view.get_dof_positions()
self._data.joint_vel[:] = self.root_physx_view.get_dof_velocities()
self._data.joint_acc[:] = (self._data.joint_vel - self._previous_joint_vel) / dt
# -- update common data
# note: these are computed in the base class
self._update_common_data(dt)
# -- update history buffers
self._previous_joint_vel[:] = self._data.joint_vel[:]
def find_joints(
self, name_keys: str | Sequence[str], joint_subset: list[str] | None = None, preserve_order: bool = False
) -> tuple[list[int], list[str]]:
"""Find joints in the articulation based on the name keys.
Please see the :func:`omni.isaac.orbit.utils.string.resolve_matching_names` function for more information
on the name matching.
Args:
name_keys: A regular expression or a list of regular expressions to match the joint names.
joint_subset: A subset of joints to search for. Defaults to None, which means all joints
in the articulation are searched.
preserve_order: Whether to preserve the order of the name keys in the output. Defaults to False.
Returns:
A tuple of lists containing the joint indices and names.
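        For instance, a minimal sketch (the joint names are assumptions for illustration):
        .. code-block:: python
            # articulation with joints: ["base_joint", "arm_joint", "wrist_joint"]
            joint_ids, joint_names = my_articulation.find_joints(["arm.*", "wrist.*"])
            # joint_ids -> [1, 2], joint_names -> ["arm_joint", "wrist_joint"]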
"""
if joint_subset is None:
joint_subset = self.joint_names
# find joints
return string_utils.resolve_matching_names(name_keys, joint_subset, preserve_order)
"""
Operations - Setters.
"""
def set_external_force_and_torque(
self,
forces: torch.Tensor,
torques: torch.Tensor,
body_ids: Sequence[int] | slice | None = None,
env_ids: Sequence[int] | None = None,
):
# call parent to set the external forces and torques into buffers
super().set_external_force_and_torque(forces, torques, body_ids, env_ids)
# reordering of the external forces and torques to match the body view ordering
if self.has_external_wrench:
self._external_force_body_view_b = self._external_force_b[:, self._body_view_ordering]
self._external_torque_body_view_b = self._external_torque_b[:, self._body_view_ordering]
"""
Operations - Writers.
"""
def write_root_pose_to_sim(self, root_pose: torch.Tensor, env_ids: Sequence[int] | None = None):
# resolve all indices
physx_env_ids = env_ids
if env_ids is None:
env_ids = slice(None)
physx_env_ids = self._ALL_INDICES
# note: we need to do this here since tensors are not set into simulation until step.
# set into internal buffers
self._data.root_state_w[env_ids, :7] = root_pose.clone()
# convert root quaternion from wxyz to xyzw
root_poses_xyzw = self._data.root_state_w[:, :7].clone()
root_poses_xyzw[:, 3:] = math_utils.convert_quat(root_poses_xyzw[:, 3:], to="xyzw")
# set into simulation
self.root_physx_view.set_root_transforms(root_poses_xyzw, indices=physx_env_ids)
def write_root_velocity_to_sim(self, root_velocity: torch.Tensor, env_ids: Sequence[int] | None = None):
# resolve all indices
physx_env_ids = env_ids
if env_ids is None:
env_ids = slice(None)
physx_env_ids = self._ALL_INDICES
# note: we need to do this here since tensors are not set into simulation until step.
# set into internal buffers
self._data.root_state_w[env_ids, 7:] = root_velocity.clone()
# set into simulation
self.root_physx_view.set_root_velocities(self._data.root_state_w[:, 7:], indices=physx_env_ids)
def write_joint_state_to_sim(
self,
position: torch.Tensor,
velocity: torch.Tensor,
joint_ids: Sequence[int] | slice | None = None,
env_ids: Sequence[int] | slice | None = None,
):
"""Write joint positions and velocities to the simulation.
Args:
position: Joint positions. Shape is (len(env_ids), len(joint_ids)).
velocity: Joint velocities. Shape is (len(env_ids), len(joint_ids)).
joint_ids: The joint indices to set the targets for. Defaults to None (all joints).
env_ids: The environment indices to set the targets for. Defaults to None (all environments).
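        For example, a sketch of resetting a subset of environments to their default joint state
        (``my_articulation`` is an assumed instance of this class):
        .. code-block:: python
            import torch
            env_ids = torch.tensor([0, 1], device=my_articulation.device)
            my_articulation.write_joint_state_to_sim(
                my_articulation.data.default_joint_pos[env_ids],
                my_articulation.data.default_joint_vel[env_ids],
                env_ids=env_ids,
            )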
"""
# resolve indices
physx_env_ids = env_ids
if env_ids is None:
env_ids = slice(None)
physx_env_ids = self._ALL_INDICES
if joint_ids is None:
joint_ids = slice(None)
# set into internal buffers
self._data.joint_pos[env_ids, joint_ids] = position
self._data.joint_vel[env_ids, joint_ids] = velocity
self._previous_joint_vel[env_ids, joint_ids] = velocity
self._data.joint_acc[env_ids, joint_ids] = 0.0
# set into simulation
self.root_physx_view.set_dof_positions(self._data.joint_pos, indices=physx_env_ids)
self.root_physx_view.set_dof_velocities(self._data.joint_vel, indices=physx_env_ids)
def write_joint_stiffness_to_sim(
self,
stiffness: torch.Tensor | float,
joint_ids: Sequence[int] | slice | None = None,
env_ids: Sequence[int] | None = None,
):
"""Write joint stiffness into the simulation.
Args:
stiffness: Joint stiffness. Shape is (len(env_ids), len(joint_ids)).
joint_ids: The joint indices to set the stiffness for. Defaults to None (all joints).
env_ids: The environment indices to set the stiffness for. Defaults to None (all environments).
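        For example, a sketch of setting a uniform stiffness on a subset of joints (the joint
        name pattern is an assumption for illustration):
        .. code-block:: python
            # resolve the joint indices for the arm joints and set their stiffness
            arm_ids, _ = my_articulation.find_joints("arm.*")
            my_articulation.write_joint_stiffness_to_sim(100.0, joint_ids=arm_ids)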
"""
# note: This function isn't setting the values for actuator models. (#128)
# resolve indices
physx_env_ids = env_ids
if env_ids is None:
env_ids = slice(None)
physx_env_ids = self._ALL_INDICES
if joint_ids is None:
joint_ids = slice(None)
# set into internal buffers
self._data.joint_stiffness[env_ids, joint_ids] = stiffness
# set into simulation
self.root_physx_view.set_dof_stiffnesses(self._data.joint_stiffness.cpu(), indices=physx_env_ids.cpu())
def write_joint_damping_to_sim(
self,
damping: torch.Tensor | float,
joint_ids: Sequence[int] | slice | None = None,
env_ids: Sequence[int] | None = None,
):
"""Write joint damping into the simulation.
Args:
damping: Joint damping. Shape is (len(env_ids), len(joint_ids)).
joint_ids: The joint indices to set the damping for.
Defaults to None (all joints).
env_ids: The environment indices to set the damping for.
Defaults to None (all environments).
"""
# note: This function isn't setting the values for actuator models. (#128)
# resolve indices
physx_env_ids = env_ids
if env_ids is None:
env_ids = slice(None)
physx_env_ids = self._ALL_INDICES
if joint_ids is None:
joint_ids = slice(None)
# set into internal buffers
self._data.joint_damping[env_ids, joint_ids] = damping
# set into simulation
self.root_physx_view.set_dof_dampings(self._data.joint_damping.cpu(), indices=physx_env_ids.cpu())
def write_joint_effort_limit_to_sim(
self,
limits: torch.Tensor | float,
joint_ids: Sequence[int] | slice | None = None,
env_ids: Sequence[int] | None = None,
):
"""Write joint effort limits into the simulation.
Args:
limits: Joint torque limits. Shape is (len(env_ids), len(joint_ids)).
joint_ids: The joint indices to set the joint torque limits for. Defaults to None (all joints).
env_ids: The environment indices to set the joint torque limits for. Defaults to None (all environments).
"""
# note: This function isn't setting the values for actuator models. (#128)
# resolve indices
physx_env_ids = env_ids
if env_ids is None:
env_ids = slice(None)
physx_env_ids = self._ALL_INDICES
if joint_ids is None:
joint_ids = slice(None)
# move tensor to cpu if needed
if isinstance(limits, torch.Tensor):
limits = limits.cpu()
# set into internal buffers
torque_limit_all = self.root_physx_view.get_dof_max_forces()
torque_limit_all[env_ids, joint_ids] = limits
# set into simulation
self.root_physx_view.set_dof_max_forces(torque_limit_all.cpu(), indices=physx_env_ids.cpu())
def write_joint_armature_to_sim(
self,
armature: torch.Tensor | float,
joint_ids: Sequence[int] | slice | None = None,
env_ids: Sequence[int] | None = None,
):
"""Write joint armature into the simulation.
Args:
armature: Joint armature. Shape is (len(env_ids), len(joint_ids)).
joint_ids: The joint indices to set the joint torque limits for. Defaults to None (all joints).
env_ids: The environment indices to set the joint torque limits for. Defaults to None (all environments).
"""
# resolve indices
physx_env_ids = env_ids
if env_ids is None:
env_ids = slice(None)
physx_env_ids = self._ALL_INDICES
if joint_ids is None:
joint_ids = slice(None)
# set into internal buffers
self._data.joint_armature[env_ids, joint_ids] = armature
# set into simulation
self.root_physx_view.set_dof_armatures(self._data.joint_armature.cpu(), indices=physx_env_ids.cpu())
def write_joint_friction_to_sim(
self,
joint_friction: torch.Tensor | float,
joint_ids: Sequence[int] | slice | None = None,
env_ids: Sequence[int] | None = None,
):
"""Write joint friction into the simulation.
Args:
joint_friction: Joint friction. Shape is (len(env_ids), len(joint_ids)).
joint_ids: The joint indices to set the joint torque limits for. Defaults to None (all joints).
env_ids: The environment indices to set the joint torque limits for. Defaults to None (all environments).
"""
# resolve indices
physx_env_ids = env_ids
if env_ids is None:
env_ids = slice(None)
physx_env_ids = self._ALL_INDICES
if joint_ids is None:
joint_ids = slice(None)
# set into internal buffers
self._data.joint_friction[env_ids, joint_ids] = joint_friction
# set into simulation
self.root_physx_view.set_dof_friction_coefficients(self._data.joint_friction.cpu(), indices=physx_env_ids.cpu())
"""
Operations - State.
"""
def set_joint_position_target(
self, target: torch.Tensor, joint_ids: Sequence[int] | slice | None = None, env_ids: Sequence[int] | None = None
):
"""Set joint position targets into internal buffers.
.. note::
This function does not apply the joint targets to the simulation. It only fills the buffers with
the desired values. To apply the joint targets, call the :meth:`write_data_to_sim` function.
Args:
target: Joint position targets. Shape is (len(env_ids), len(joint_ids)).
joint_ids: The joint indices to set the targets for. Defaults to None (all joints).
env_ids: The environment indices to set the targets for. Defaults to None (all environments).
"""
# resolve indices
if env_ids is None:
env_ids = slice(None)
if joint_ids is None:
joint_ids = slice(None)
# set targets
self._data.joint_pos_target[env_ids, joint_ids] = target
def set_joint_velocity_target(
self, target: torch.Tensor, joint_ids: Sequence[int] | slice | None = None, env_ids: Sequence[int] | None = None
):
"""Set joint velocity targets into internal buffers.
.. note::
This function does not apply the joint targets to the simulation. It only fills the buffers with
the desired values. To apply the joint targets, call the :meth:`write_data_to_sim` function.
Args:
target: Joint velocity targets. Shape is (len(env_ids), len(joint_ids)).
joint_ids: The joint indices to set the targets for. Defaults to None (all joints).
env_ids: The environment indices to set the targets for. Defaults to None (all environments).
"""
# resolve indices
if env_ids is None:
env_ids = slice(None)
if joint_ids is None:
joint_ids = slice(None)
# set targets
self._data.joint_vel_target[env_ids, joint_ids] = target
def set_joint_effort_target(
self, target: torch.Tensor, joint_ids: Sequence[int] | slice | None = None, env_ids: Sequence[int] | None = None
):
"""Set joint efforts into internal buffers.
.. note::
This function does not apply the joint targets to the simulation. It only fills the buffers with
the desired values. To apply the joint targets, call the :meth:`write_data_to_sim` function.
Args:
target: Joint effort targets. Shape is (len(env_ids), len(joint_ids)).
joint_ids: The joint indices to set the targets for. Defaults to None (all joints).
env_ids: The environment indices to set the targets for. Defaults to None (all environments).
"""
# resolve indices
if env_ids is None:
env_ids = slice(None)
if joint_ids is None:
joint_ids = slice(None)
# set targets
self._data.joint_effort_target[env_ids, joint_ids] = target
"""
Internal helper.
"""
def _initialize_impl(self):
# create simulation view
self._physics_sim_view = physx.create_simulation_view(self._backend)
self._physics_sim_view.set_subspace_roots("/")
# obtain the first prim in the regex expression (all others are assumed to be a copy of this)
template_prim = sim_utils.find_first_matching_prim(self.cfg.prim_path)
if template_prim is None:
raise RuntimeError(f"Failed to find prim for expression: '{self.cfg.prim_path}'.")
template_prim_path = template_prim.GetPath().pathString
# find articulation root prims
root_prims = sim_utils.get_all_matching_child_prims(
template_prim_path, predicate=lambda prim: prim.HasAPI(UsdPhysics.ArticulationRootAPI)
)
if len(root_prims) != 1:
raise RuntimeError(
f"Failed to find a single articulation root when resolving '{self.cfg.prim_path}'."
f" Found roots '{root_prims}' under '{template_prim_path}'."
)
# resolve articulation root prim back into regex expression
root_prim_path = root_prims[0].GetPath().pathString
root_prim_path_expr = self.cfg.prim_path + root_prim_path[len(template_prim_path) :]
# -- articulation
self._root_physx_view = self._physics_sim_view.create_articulation_view(root_prim_path_expr.replace(".*", "*"))
# -- link views
# note: we use the root view to get the body names, but we use the body view to get the
# actual data. This is mainly needed to apply external forces to the bodies.
physx_body_names = self.root_physx_view.shared_metatype.link_names
body_names_regex = r"(" + "|".join(physx_body_names) + r")"
body_names_regex = f"{self.cfg.prim_path}/{body_names_regex}"
self._body_physx_view = self._physics_sim_view.create_rigid_body_view(body_names_regex.replace(".*", "*"))
# create ordering from articulation view to body view for body names
# note: we need to do this since the body view is not ordered in the same way as the articulation view
# -- root view
root_view_body_names = self.body_names
# -- body view
prim_paths = self._body_physx_view.prim_paths[: self.num_bodies]
body_view_body_names = [path.split("/")[-1] for path in prim_paths]
# -- mapping from articulation view to body view
self._body_view_ordering = [body_view_body_names.index(name) for name in root_view_body_names]
self._body_view_ordering = torch.tensor(self._body_view_ordering, dtype=torch.long, device=self.device)
# log information about the articulation
carb.log_info(f"Articulation initialized at: {self.cfg.prim_path} with root '{root_prim_path_expr}'.")
carb.log_info(f"Is fixed root: {self.is_fixed_base}")
carb.log_info(f"Number of bodies: {self.num_bodies}")
carb.log_info(f"Body names: {self.body_names}")
carb.log_info(f"Number of joints: {self.num_joints}")
carb.log_info(f"Joint names: {self.joint_names}")
carb.log_info(f"Number of fixed tendons: {self.num_fixed_tendons}")
# -- assert that parsing was successful
if set(physx_body_names) != set(self.body_names):
raise RuntimeError("Failed to parse all bodies properly in the articulation.")
# create buffers
self._create_buffers()
# process configuration
self._process_cfg()
self._process_actuators_cfg()
# validate configuration
self._validate_cfg()
# log joint information
self._log_articulation_joint_info()
def _create_buffers(self):
# allocate buffers
super()._create_buffers()
# history buffers
self._previous_joint_vel = torch.zeros(self.num_instances, self.num_joints, device=self.device)
# asset data
# -- properties
self._data.joint_names = self.joint_names
# -- joint states
self._data.joint_pos = torch.zeros(self.num_instances, self.num_joints, device=self.device)
self._data.joint_vel = torch.zeros_like(self._data.joint_pos)
self._data.joint_acc = torch.zeros_like(self._data.joint_pos)
self._data.default_joint_pos = torch.zeros_like(self._data.joint_pos)
self._data.default_joint_vel = torch.zeros_like(self._data.joint_pos)
# -- joint commands
self._data.joint_pos_target = torch.zeros_like(self._data.joint_pos)
self._data.joint_vel_target = torch.zeros_like(self._data.joint_pos)
self._data.joint_effort_target = torch.zeros_like(self._data.joint_pos)
self._data.joint_stiffness = torch.zeros_like(self._data.joint_pos)
self._data.joint_damping = torch.zeros_like(self._data.joint_pos)
self._data.joint_armature = torch.zeros_like(self._data.joint_pos)
self._data.joint_friction = torch.zeros_like(self._data.joint_pos)
# -- joint commands (explicit)
self._data.computed_torque = torch.zeros_like(self._data.joint_pos)
self._data.applied_torque = torch.zeros_like(self._data.joint_pos)
# -- other data
self._data.soft_joint_pos_limits = torch.zeros(self.num_instances, self.num_joints, 2, device=self.device)
self._data.soft_joint_vel_limits = torch.zeros(self.num_instances, self.num_joints, device=self.device)
self._data.gear_ratio = torch.ones(self.num_instances, self.num_joints, device=self.device)
# soft joint position limits (recommended not to be too close to limits).
joint_pos_limits = self.root_physx_view.get_dof_limits()
joint_pos_mean = (joint_pos_limits[..., 0] + joint_pos_limits[..., 1]) / 2
joint_pos_range = joint_pos_limits[..., 1] - joint_pos_limits[..., 0]
soft_limit_factor = self.cfg.soft_joint_pos_limit_factor
# add to data
self._data.soft_joint_pos_limits[..., 0] = joint_pos_mean - 0.5 * joint_pos_range * soft_limit_factor
self._data.soft_joint_pos_limits[..., 1] = joint_pos_mean + 0.5 * joint_pos_range * soft_limit_factor
# create buffers to store processed actions from actuator models
self._joint_pos_target_sim = torch.zeros_like(self._data.joint_pos_target)
self._joint_vel_target_sim = torch.zeros_like(self._data.joint_pos_target)
self._joint_effort_target_sim = torch.zeros_like(self._data.joint_pos_target)
def _process_cfg(self):
"""Post processing of configuration parameters."""
# default state
super()._process_cfg()
# -- joint state
# joint pos
indices_list, _, values_list = string_utils.resolve_matching_names_values(
self.cfg.init_state.joint_pos, self.joint_names
)
self._data.default_joint_pos[:, indices_list] = torch.tensor(values_list, device=self.device)
# joint vel
indices_list, _, values_list = string_utils.resolve_matching_names_values(
self.cfg.init_state.joint_vel, self.joint_names
)
self._data.default_joint_vel[:, indices_list] = torch.tensor(values_list, device=self.device)
"""
Internal helpers -- Actuators.
"""
def _process_actuators_cfg(self):
"""Process and apply articulation joint properties."""
# flag for implicit actuators
        # if this is false, we bypass certain checks when doing actuator-related operations
self._has_implicit_actuators = False
# cache the values coming from the usd
usd_stiffness = self.root_physx_view.get_dof_stiffnesses().clone()
usd_damping = self.root_physx_view.get_dof_dampings().clone()
usd_armature = self.root_physx_view.get_dof_armatures().clone()
usd_friction = self.root_physx_view.get_dof_friction_coefficients().clone()
usd_effort_limit = self.root_physx_view.get_dof_max_forces().clone()
usd_velocity_limit = self.root_physx_view.get_dof_max_velocities().clone()
# iterate over all actuator configurations
for actuator_name, actuator_cfg in self.cfg.actuators.items():
# type annotation for type checkers
actuator_cfg: ActuatorBaseCfg
# create actuator group
joint_ids, joint_names = self.find_joints(actuator_cfg.joint_names_expr)
# check if any joints are found
if len(joint_names) == 0:
raise ValueError(
f"No joints found for actuator group: {actuator_name} with joint name expression:"
f" {actuator_cfg.joint_names_expr}."
)
# create actuator collection
            # note: for efficiency, we avoid indexing when the actuator covers all the joints
actuator: ActuatorBase = actuator_cfg.class_type(
cfg=actuator_cfg,
joint_names=joint_names,
joint_ids=slice(None) if len(joint_names) == self.num_joints else joint_ids,
num_envs=self.num_instances,
device=self.device,
stiffness=usd_stiffness[:, joint_ids],
damping=usd_damping[:, joint_ids],
armature=usd_armature[:, joint_ids],
friction=usd_friction[:, joint_ids],
effort_limit=usd_effort_limit[:, joint_ids],
velocity_limit=usd_velocity_limit[:, joint_ids],
)
# log information on actuator groups
carb.log_info(
f"Actuator collection: {actuator_name} with model '{actuator_cfg.class_type.__name__}' and"
f" joint names: {joint_names} [{joint_ids}]."
)
# store actuator group
self.actuators[actuator_name] = actuator
# set the passed gains and limits into the simulation
if isinstance(actuator, ImplicitActuator):
self._has_implicit_actuators = True
                # the gains and limits are set into the simulation since the actuator model is implicit
self.write_joint_stiffness_to_sim(actuator.stiffness, joint_ids=actuator.joint_indices)
self.write_joint_damping_to_sim(actuator.damping, joint_ids=actuator.joint_indices)
self.write_joint_effort_limit_to_sim(actuator.effort_limit, joint_ids=actuator.joint_indices)
self.write_joint_armature_to_sim(actuator.armature, joint_ids=actuator.joint_indices)
self.write_joint_friction_to_sim(actuator.friction, joint_ids=actuator.joint_indices)
else:
# the gains and limits are processed by the actuator model
# we set gains to zero, and torque limit to a high value in simulation to avoid any interference
self.write_joint_stiffness_to_sim(0.0, joint_ids=actuator.joint_indices)
self.write_joint_damping_to_sim(0.0, joint_ids=actuator.joint_indices)
self.write_joint_effort_limit_to_sim(1.0e9, joint_ids=actuator.joint_indices)
self.write_joint_armature_to_sim(actuator.armature, joint_ids=actuator.joint_indices)
self.write_joint_friction_to_sim(actuator.friction, joint_ids=actuator.joint_indices)
# perform some sanity checks to ensure actuators are prepared correctly
total_act_joints = sum(actuator.num_joints for actuator in self.actuators.values())
if total_act_joints != (self.num_joints - self.num_fixed_tendons):
carb.log_warn(
"Not all actuators are configured! Total number of actuated joints not equal to number of"
f" joints available: {total_act_joints} != {self.num_joints}."
)
def _apply_actuator_model(self):
"""Processes joint commands for the articulation by forwarding them to the actuators.
The actions are first processed using actuator models. Depending on the robot configuration,
        the actuator models compute the joint-level simulation commands and set them into the PhysX buffers.
"""
# process actions per group
for actuator in self.actuators.values():
# prepare input for actuator model based on cached data
# TODO : A tensor dict would be nice to do the indexing of all tensors together
control_action = ArticulationActions(
joint_positions=self._data.joint_pos_target[:, actuator.joint_indices],
joint_velocities=self._data.joint_vel_target[:, actuator.joint_indices],
joint_efforts=self._data.joint_effort_target[:, actuator.joint_indices],
joint_indices=actuator.joint_indices,
)
# compute joint command from the actuator model
control_action = actuator.compute(
control_action,
joint_pos=self._data.joint_pos[:, actuator.joint_indices],
joint_vel=self._data.joint_vel[:, actuator.joint_indices],
)
# update targets (these are set into the simulation)
if control_action.joint_positions is not None:
self._joint_pos_target_sim[:, actuator.joint_indices] = control_action.joint_positions
if control_action.joint_velocities is not None:
self._joint_vel_target_sim[:, actuator.joint_indices] = control_action.joint_velocities
if control_action.joint_efforts is not None:
self._joint_effort_target_sim[:, actuator.joint_indices] = control_action.joint_efforts
# update state of the actuator model
# -- torques
self._data.computed_torque[:, actuator.joint_indices] = actuator.computed_effort
self._data.applied_torque[:, actuator.joint_indices] = actuator.applied_effort
# -- actuator data
self._data.soft_joint_vel_limits[:, actuator.joint_indices] = actuator.velocity_limit
# TODO: find a cleaner way to handle gear ratio. Only needed for variable gear ratio actuators.
if hasattr(actuator, "gear_ratio"):
self._data.gear_ratio[:, actuator.joint_indices] = actuator.gear_ratio
"""
Internal helpers -- Debugging.
"""
def _validate_cfg(self):
"""Validate the configuration after processing.
Note:
This function should be called only after the configuration has been processed and the buffers have been
created. Otherwise, some settings that are altered during processing may not be validated.
For instance, the actuator models may change the joint max velocity limits.
"""
# check that the default values are within the limits
joint_pos_limits = self.root_physx_view.get_dof_limits()[0].to(self.device)
out_of_range = self._data.default_joint_pos[0] < joint_pos_limits[:, 0]
out_of_range |= self._data.default_joint_pos[0] > joint_pos_limits[:, 1]
violated_indices = torch.nonzero(out_of_range, as_tuple=False).squeeze(-1)
# throw error if any of the default joint positions are out of the limits
if len(violated_indices) > 0:
# prepare message for violated joints
msg = "The following joints have default positions out of the limits: \n"
for idx in violated_indices:
joint_name = self.data.joint_names[idx]
joint_limits = joint_pos_limits[idx]
joint_pos = self.data.default_joint_pos[0, idx]
# add to message
msg += f"\t- '{joint_name}': {joint_pos:.3f} not in [{joint_limits[0]:.3f}, {joint_limits[1]:.3f}]\n"
raise ValueError(msg)
# check that the default joint velocities are within the limits
joint_max_vel = self.root_physx_view.get_dof_max_velocities()[0].to(self.device)
out_of_range = torch.abs(self._data.default_joint_vel[0]) > joint_max_vel
violated_indices = torch.nonzero(out_of_range, as_tuple=False).squeeze(-1)
if len(violated_indices) > 0:
# prepare message for violated joints
msg = "The following joints have default velocities out of the limits: \n"
for idx in violated_indices:
joint_name = self.data.joint_names[idx]
joint_limits = [-joint_max_vel[idx], joint_max_vel[idx]]
joint_vel = self.data.default_joint_vel[0, idx]
# add to message
msg += f"\t- '{joint_name}': {joint_vel:.3f} not in [{joint_limits[0]:.3f}, {joint_limits[1]:.3f}]\n"
raise ValueError(msg)
def _log_articulation_joint_info(self):
"""Log information about the articulation's simulated joints."""
# read out all joint parameters from simulation
# -- gains
stiffnesses = self.root_physx_view.get_dof_stiffnesses()[0].tolist()
dampings = self.root_physx_view.get_dof_dampings()[0].tolist()
# -- properties
armatures = self.root_physx_view.get_dof_armatures()[0].tolist()
frictions = self.root_physx_view.get_dof_friction_coefficients()[0].tolist()
# -- limits
position_limits = self.root_physx_view.get_dof_limits()[0].tolist()
velocity_limits = self.root_physx_view.get_dof_max_velocities()[0].tolist()
effort_limits = self.root_physx_view.get_dof_max_forces()[0].tolist()
# create table for term information
table = PrettyTable(float_format=".3f")
table.title = f"Simulation Joint Information (Prim path: {self.cfg.prim_path})"
table.field_names = [
"Index",
"Name",
"Stiffness",
"Damping",
"Armature",
"Friction",
"Position Limits",
"Velocity Limits",
"Effort Limits",
]
# set alignment of table columns
table.align["Name"] = "l"
# add info on each term
for index, name in enumerate(self.joint_names):
table.add_row([
index,
name,
stiffnesses[index],
dampings[index],
armatures[index],
frictions[index],
position_limits[index],
velocity_limits[index],
effort_limits[index],
])
# convert table to string
carb.log_info(f"Simulation parameters for joints in {self.cfg.prim_path}:\n" + table.get_string())
# read out all tendon parameters from simulation
if self.num_fixed_tendons > 0:
# -- gains
ft_stiffnesses = self.root_physx_view.get_fixed_tendon_stiffnesses()[0].tolist()
ft_dampings = self.root_physx_view.get_fixed_tendon_dampings()[0].tolist()
# -- limits
ft_limit_stiffnesses = self.root_physx_view.get_fixed_tendon_limit_stiffnesses()[0].tolist()
ft_limits = self.root_physx_view.get_fixed_tendon_limits()[0].tolist()
ft_rest_lengths = self.root_physx_view.get_fixed_tendon_rest_lengths()[0].tolist()
ft_offsets = self.root_physx_view.get_fixed_tendon_offsets()[0].tolist()
# create table for term information
tendon_table = PrettyTable(float_format=".3f")
tendon_table.title = f"Simulation Tendon Information (Prim path: {self.cfg.prim_path})"
tendon_table.field_names = [
"Index",
"Stiffness",
"Damping",
"Limit Stiffness",
"Limit",
"Rest Length",
"Offset",
]
# add info on each term
for index in range(self.num_fixed_tendons):
tendon_table.add_row([
index,
ft_stiffnesses[index],
ft_dampings[index],
ft_limit_stiffnesses[index],
ft_limits[index],
ft_rest_lengths[index],
ft_offsets[index],
])
# convert table to string
carb.log_info(f"Simulation parameters for tendons in {self.cfg.prim_path}:\n" + tendon_table.get_string())
| 43,405 | Python | 47.015487 | 120 | 0.627808 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/assets/articulation/articulation_cfg.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
from dataclasses import MISSING
from omni.isaac.orbit.actuators import ActuatorBaseCfg
from omni.isaac.orbit.utils import configclass
from ..rigid_object import RigidObjectCfg
from .articulation import Articulation
@configclass
class ArticulationCfg(RigidObjectCfg):
"""Configuration parameters for an articulation."""
class_type: type = Articulation
@configclass
class InitialStateCfg(RigidObjectCfg.InitialStateCfg):
"""Initial state of the articulation."""
        # joint state
joint_pos: dict[str, float] = {".*": 0.0}
"""Joint positions of the joints. Defaults to 0.0 for all joints."""
joint_vel: dict[str, float] = {".*": 0.0}
"""Joint velocities of the joints. Defaults to 0.0 for all joints."""
##
# Initialize configurations.
##
init_state: InitialStateCfg = InitialStateCfg()
"""Initial state of the articulated object. Defaults to identity pose with zero velocity and zero joint state."""
soft_joint_pos_limit_factor: float = 1.0
"""Fraction specifying the range of DOF position limits (parsed from the asset) to use.
Defaults to 1.0."""
actuators: dict[str, ActuatorBaseCfg] = MISSING
"""Actuators for the robot with corresponding joint names."""
| 1,427 | Python | 31.454545 | 117 | 0.703574 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/markers/visualization_markers.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""A class to coordinate groups of visual markers (such as spheres, frames or arrows)
using the `UsdGeom.PointInstancer`_ class.
The class :class:`VisualizationMarkers` is used to create a group of visual markers and
visualize them in the viewport. The markers are represented as :class:`UsdGeom.PointInstancer` prims
in the USD stage. The markers are created as prototypes in the :class:`UsdGeom.PointInstancer` prim
and are instanced in the :class:`UsdGeom.PointInstancer` prim. The markers can be visualized by
passing the indices of the marker prototypes and their translations, orientations and scales.
The marker prototypes can be configured with the :class:`VisualizationMarkersCfg` class.
.. _UsdGeom.PointInstancer: https://graphics.pixar.com/usd/dev/api/class_usd_geom_point_instancer.html
"""
from __future__ import annotations
import numpy as np
import torch
from dataclasses import MISSING
import omni.isaac.core.utils.stage as stage_utils
import omni.kit.commands
import omni.physx.scripts.utils as physx_utils
from pxr import Gf, PhysxSchema, Sdf, Usd, UsdGeom, UsdPhysics, Vt
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.sim.spawners import SpawnerCfg
from omni.isaac.orbit.utils.configclass import configclass
from omni.isaac.orbit.utils.math import convert_quat
@configclass
class VisualizationMarkersCfg:
"""A class to configure a :class:`VisualizationMarkers`."""
prim_path: str = MISSING
"""The prim path where the :class:`UsdGeom.PointInstancer` will be created."""
markers: dict[str, SpawnerCfg] = MISSING
"""The dictionary of marker configurations.
The key is the name of the marker, and the value is the configuration of the marker.
The key is used to identify the marker in the class.
"""
class VisualizationMarkers:
"""A class to coordinate groups of visual markers (loaded from USD).
This class allows visualization of different UI markers in the scene, such as points and frames.
The class wraps around the `UsdGeom.PointInstancer`_ for efficient handling of objects
in the stage via instancing the created marker prototype prims.
A marker prototype prim is a reusable template prim used for defining variations of objects
in the scene. For example, a sphere prim can be used as a marker prototype prim to create
multiple sphere prims in the scene at different locations. Thus, prototype prims are useful
for creating multiple instances of the same prim in the scene.
    The class parses the configuration to create the different marker prototypes in the stage. Each marker
    prototype prim is created as a child of the :class:`UsdGeom.PointInstancer` prim. The prim path for
    the marker prim is resolved using the key of the marker in the :attr:`VisualizationMarkersCfg.markers`
    dictionary. The marker prototypes are created using the :meth:`omni.isaac.core.utils.create_prim`
    function, and then instanced using the :class:`UsdGeom.PointInstancer` prim to allow creating multiple
    instances of the marker prims.
Switching between different marker prototypes is possible by calling the :meth:`visualize` method with
the prototype indices corresponding to the marker prototype. The prototype indices are based on the order
in the :attr:`VisualizationMarkersCfg.markers` dictionary. For example, if the dictionary has two markers,
"marker1" and "marker2", then their prototype indices are 0 and 1 respectively. The prototype indices
can be passed as a list or array of integers.
Usage:
The following snippet shows how to create 24 sphere markers with a radius of 1.0 at random translations
within the range [-1.0, 1.0]. The first 12 markers will be colored red and the rest will be colored green.
.. code-block:: python
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.markers import VisualizationMarkersCfg, VisualizationMarkers
# Create the markers configuration
# This creates two marker prototypes, "marker1" and "marker2" which are spheres with a radius of 1.0.
# The color of "marker1" is red and the color of "marker2" is green.
cfg = VisualizationMarkersCfg(
prim_path="/World/Visuals/testMarkers",
markers={
"marker1": sim_utils.SphereCfg(
radius=1.0,
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(1.0, 0.0, 0.0)),
),
"marker2": VisualizationMarkersCfg.SphereCfg(
radius=1.0,
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(0.0, 1.0, 0.0)),
),
}
)
# Create the markers instance
# This will create a UsdGeom.PointInstancer prim at the given path along with the marker prototypes.
marker = VisualizationMarkers(cfg)
# Set position of the marker
# -- randomly sample translations between -1.0 and 1.0
marker_translations = np.random.uniform(-1.0, 1.0, (24, 3))
# -- this will create 24 markers at the given translations
# note: the markers will all be `marker1` since the marker indices are not given
marker.visualize(translations=marker_translations)
# alter the markers based on their prototypes indices
# first 12 markers will be marker1 and the rest will be marker2
# 0 -> marker1, 1 -> marker2
marker_indices = [0] * 12 + [1] * 12
# this will change the marker prototypes at the given indices
# note: the translations of the markers will not be changed from the previous call
# since the translations are not given.
marker.visualize(marker_indices=marker_indices)
# alter the markers based on their prototypes indices and translations
marker.visualize(marker_indices=marker_indices, translations=marker_translations)
.. _UsdGeom.PointInstancer: https://graphics.pixar.com/usd/dev/api/class_usd_geom_point_instancer.html
"""
def __init__(self, cfg: VisualizationMarkersCfg):
"""Initialize the class.
When the class is initialized, the :class:`UsdGeom.PointInstancer` is created into the stage
and the marker prims are registered into it.
.. note::
If a prim already exists at the given path, the function will find the next free path
and create the :class:`UsdGeom.PointInstancer` prim there.
Args:
cfg: The configuration for the markers.
Raises:
ValueError: When no markers are provided in the :obj:`cfg`.
"""
# get next free path for the prim
prim_path = stage_utils.get_next_free_path(cfg.prim_path)
# create a new prim
stage = stage_utils.get_current_stage()
self._instancer_manager = UsdGeom.PointInstancer.Define(stage, prim_path)
# store inputs
self.prim_path = prim_path
self.cfg = cfg
        # check if any markers are provided
if len(self.cfg.markers) == 0:
raise ValueError(f"The `cfg.markers` cannot be empty. Received: {self.cfg.markers}")
# create a child prim for the marker
self._add_markers_prototypes(self.cfg.markers)
# Note: We need to do this the first time to initialize the instancer.
# Otherwise, the instancer will not be "created" and the function `GetInstanceIndices()` will fail.
self._instancer_manager.GetProtoIndicesAttr().Set(list(range(self.num_prototypes)))
self._instancer_manager.GetPositionsAttr().Set([Gf.Vec3f(0.0)] * self.num_prototypes)
self._count = self.num_prototypes
def __str__(self) -> str:
"""Return: A string representation of the class."""
msg = f"VisualizationMarkers(prim_path={self.prim_path})"
msg += f"\n\tCount: {self.count}"
msg += f"\n\tNumber of prototypes: {self.num_prototypes}"
msg += "\n\tMarkers Prototypes:"
for index, (name, marker) in enumerate(self.cfg.markers.items()):
msg += f"\n\t\t[Index: {index}]: {name}: {marker.to_dict()}"
return msg
"""
Properties.
"""
@property
def num_prototypes(self) -> int:
"""The number of marker prototypes available."""
return len(self.cfg.markers)
@property
def count(self) -> int:
"""The total number of marker instances."""
# TODO: Update this when the USD API is available (Isaac Sim 2023.1)
# return self._instancer_manager.GetInstanceCount()
return self._count
"""
Operations.
"""
def set_visibility(self, visible: bool):
"""Sets the visibility of the markers.
The method does this through the USD API.
Args:
visible: flag to set the visibility.
"""
imageable = UsdGeom.Imageable(self._instancer_manager)
if visible:
imageable.MakeVisible()
else:
imageable.MakeInvisible()
def is_visible(self) -> bool:
"""Checks the visibility of the markers.
Returns:
True if the markers are visible, False otherwise.
"""
return self._instancer_manager.GetVisibilityAttr().Get() != UsdGeom.Tokens.invisible
def visualize(
self,
translations: np.ndarray | torch.Tensor | None = None,
orientations: np.ndarray | torch.Tensor | None = None,
scales: np.ndarray | torch.Tensor | None = None,
marker_indices: list[int] | np.ndarray | torch.Tensor | None = None,
):
"""Update markers in the viewport.
.. note::
If the prim `PointInstancer` is hidden in the stage, the function will simply return
without updating the markers. This helps in unnecessary computation when the markers
are not visible.
Whenever updating the markers, the input arrays must have the same number of elements
in the first dimension. If the number of elements is different, the `UsdGeom.PointInstancer`
will raise an error complaining about the mismatch.
Additionally, the function supports dynamic update of the markers. This means that the
number of markers can change between calls. For example, if you have 24 points that you
want to visualize, you can pass 24 translations, orientations, and scales. If you want to
visualize only 12 points, you can pass 12 translations, orientations, and scales. The
function will automatically update the number of markers in the scene.
The function will also update the marker prototypes based on their prototype indices. For instance,
if you have two marker prototypes, and you pass the following marker indices: [0, 1, 0, 1], the function
will update the first and third markers with the first prototype, and the second and fourth markers
with the second prototype. This is useful when you want to visualize different markers in the same
scene. The list of marker indices must have the same number of elements as the translations, orientations,
or scales. If the number of elements is different, the function will raise an error.
.. caution::
This function will update all the markers instanced from the prototypes. That means
if you have 24 markers, you will need to pass 24 translations, orientations, and scales.
If you want to update only a subset of the markers, you will need to handle the indices
yourself and pass the complete arrays to this function.
Args:
translations: Translations w.r.t. parent prim frame. Shape is (M, 3).
Defaults to None, which means left unchanged.
orientations: Quaternion orientations (w, x, y, z) w.r.t. parent prim frame. Shape is (M, 4).
Defaults to None, which means left unchanged.
scales: Scale applied before any rotation is applied. Shape is (M, 3).
Defaults to None, which means left unchanged.
marker_indices: Decides which marker prototype to visualize. Shape is (M).
Defaults to None, which means left unchanged provided that the total number of markers
is the same as the previous call. If the number of markers is different, the function
will update the number of markers in the scene.
Raises:
ValueError: When input arrays do not follow the expected shapes.
ValueError: When the function is called with all None arguments.
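        For example, a sketch of dynamically changing the number of visualized markers
        (the ``marker`` instance is assumed from the class usage example above):
        .. code-block:: python
            # visualize 12 markers instead of the previous 24 by passing fewer translations
            marker.visualize(translations=np.random.uniform(-1.0, 1.0, (12, 3)))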
"""
# check if it is visible (if not then let's not waste time)
if not self.is_visible():
return
# check if we have any markers to visualize
num_markers = 0
# resolve inputs
# -- position
if translations is not None:
if isinstance(translations, torch.Tensor):
translations = translations.detach().cpu().numpy()
# check that shape is correct
if translations.shape[1] != 3 or len(translations.shape) != 2:
raise ValueError(f"Expected `translations` to have shape (M, 3). Received: {translations.shape}.")
# apply translations
self._instancer_manager.GetPositionsAttr().Set(Vt.Vec3fArray.FromNumpy(translations))
# update number of markers
num_markers = translations.shape[0]
# -- orientation
if orientations is not None:
if isinstance(orientations, torch.Tensor):
orientations = orientations.detach().cpu().numpy()
# check that shape is correct
if orientations.shape[1] != 4 or len(orientations.shape) != 2:
raise ValueError(f"Expected `orientations` to have shape (M, 4). Received: {orientations.shape}.")
# roll orientations from (w, x, y, z) to (x, y, z, w)
# internally USD expects (x, y, z, w)
orientations = convert_quat(orientations, to="xyzw")
# apply orientations
self._instancer_manager.GetOrientationsAttr().Set(Vt.QuathArray.FromNumpy(orientations))
# update number of markers
num_markers = orientations.shape[0]
# -- scales
if scales is not None:
if isinstance(scales, torch.Tensor):
scales = scales.detach().cpu().numpy()
# check that shape is correct
if scales.shape[1] != 3 or len(scales.shape) != 2:
raise ValueError(f"Expected `scales` to have shape (M, 3). Received: {scales.shape}.")
# apply scales
self._instancer_manager.GetScalesAttr().Set(Vt.Vec3fArray.FromNumpy(scales))
# update number of markers
num_markers = scales.shape[0]
# -- status
if marker_indices is not None or num_markers != self._count:
# apply marker indices
if marker_indices is not None:
if isinstance(marker_indices, torch.Tensor):
marker_indices = marker_indices.detach().cpu().numpy()
elif isinstance(marker_indices, list):
marker_indices = np.array(marker_indices)
# check that shape is correct
if len(marker_indices.shape) != 1:
raise ValueError(f"Expected `marker_indices` to have shape (M,). Received: {marker_indices.shape}.")
# apply proto indices
self._instancer_manager.GetProtoIndicesAttr().Set(Vt.IntArray.FromNumpy(marker_indices))
# update number of markers
num_markers = marker_indices.shape[0]
else:
# check that number of markers is not zero
if num_markers == 0:
raise ValueError("Number of markers cannot be zero! Hint: The function was called with no inputs?")
# set all markers to be the first prototype
self._instancer_manager.GetProtoIndicesAttr().Set([0] * num_markers)
# set number of markers
self._count = num_markers
"""
Helper functions.
"""
def _add_markers_prototypes(self, markers_cfg: dict[str, sim_utils.SpawnerCfg]):
"""Adds markers prototypes to the scene and sets the markers instancer to use them."""
# add markers based on config
for name, cfg in markers_cfg.items():
# resolve prim path
marker_prim_path = f"{self.prim_path}/{name}"
# create a child prim for the marker
prim = cfg.func(prim_path=marker_prim_path, cfg=cfg)
# make the asset uninstanceable (in case it is)
# point instancer defines its own prototypes so if an asset is already instanced, this doesn't work.
self._process_prototype_prim(prim)
# remove any physics on the markers because they are only for visualization!
physx_utils.removeRigidBodySubtree(prim)
# add child reference to point instancer
self._instancer_manager.GetPrototypesRel().AddTarget(marker_prim_path)
# check that we loaded all the prototypes
prototypes = self._instancer_manager.GetPrototypesRel().GetTargets()
if len(prototypes) != len(markers_cfg):
raise RuntimeError(
f"Failed to load all the prototypes. Expected: {len(markers_cfg)}. Received: {len(prototypes)}."
)
def _process_prototype_prim(self, prim: Usd.Prim):
"""Process a prim and its descendants to make them suitable for defining prototypes.
Point instancer defines its own prototypes so if an asset is already instanced, this doesn't work.
This function checks if the prim at the specified prim path and its descendants are instanced.
If so, it makes the respective prim uninstanceable by disabling instancing on the prim.
Additionally, it makes the prim invisible to secondary rays. This is useful when we do not want
to see the marker prims on camera images.
        Args:
            prim: The prim to process, along with all of its descendant prims.
"""
# check if prim is valid
if not prim.IsValid():
raise ValueError(f"Prim at path '{prim.GetPrimAtPath()}' is not valid.")
# iterate over all prims under prim-path
all_prims = [prim]
while len(all_prims) > 0:
# get current prim
child_prim = all_prims.pop(0)
# check if it is physics body -> if so, remove it
if child_prim.HasAPI(UsdPhysics.ArticulationRootAPI):
child_prim.RemoveAPI(UsdPhysics.ArticulationRootAPI)
child_prim.RemoveAPI(PhysxSchema.PhysxArticulationAPI)
if child_prim.HasAPI(UsdPhysics.RigidBodyAPI):
child_prim.RemoveAPI(UsdPhysics.RigidBodyAPI)
child_prim.RemoveAPI(PhysxSchema.PhysxRigidBodyAPI)
if child_prim.IsA(UsdPhysics.Joint):
child_prim.GetAttribute("physics:jointEnabled").Set(False)
# check if prim is instanced -> if so, make it uninstanceable
if child_prim.IsInstance():
child_prim.SetInstanceable(False)
# check if prim is a mesh -> if so, make it invisible to secondary rays
if child_prim.IsA(UsdGeom.Gprim):
# invisible to secondary rays such as depth images
omni.kit.commands.execute(
"ChangePropertyCommand",
prop_path=Sdf.Path(f"{child_prim.GetPrimPath().pathString}.primvars:invisibleToSecondaryRays"),
value=True,
prev=None,
type_to_create_if_not_exist=Sdf.ValueTypeNames.Bool,
)
# add children to list
all_prims += child_prim.GetChildren()
| 20,362 | Python | 48.787286 | 120 | 0.646646 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/markers/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Sub-package for marker utilities to simplify creation of UI elements in the GUI.
Currently, the sub-package provides the following classes:
* :class:`VisualizationMarkers` for creating a group of markers using `UsdGeom.PointInstancer
<https://graphics.pixar.com/usd/dev/api/class_usd_geom_point_instancer.html>`_.
.. note::
For some simple use-cases, it may be sufficient to use the debug drawing utilities from Isaac Sim.
The debug drawing API is available in the `omni.isaac.debug_drawing`_ module. It allows drawing of
points and splines efficiently on the UI.
.. _omni.isaac.debug_drawing: https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/ext_omni_isaac_debug_drawing.html
"""
from __future__ import annotations
from .config import * # noqa: F401, F403
from .visualization_markers import VisualizationMarkers, VisualizationMarkersCfg
| 1,003 | Python | 34.857142 | 127 | 0.761715 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/omni/isaac/orbit/markers/config/__init__.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
from __future__ import annotations
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.markers.visualization_markers import VisualizationMarkersCfg
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
##
# Sensors.
##
RAY_CASTER_MARKER_CFG = VisualizationMarkersCfg(
markers={
"hit": sim_utils.SphereCfg(
radius=0.02,
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(1.0, 0.0, 0.0)),
),
},
)
"""Configuration for the ray-caster marker."""
CONTACT_SENSOR_MARKER_CFG = VisualizationMarkersCfg(
markers={
"contact": sim_utils.SphereCfg(
radius=0.02,
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(1.0, 0.0, 0.0)),
),
"no_contact": sim_utils.SphereCfg(
radius=0.02,
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(0.0, 1.0, 0.0)),
visible=False,
),
},
)
"""Configuration for the contact sensor marker."""
##
# Frames.
##
FRAME_MARKER_CFG = VisualizationMarkersCfg(
markers={
"frame": sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/UIElements/frame_prim.usd",
scale=(0.5, 0.5, 0.5),
)
}
)
"""Configuration for the frame marker."""
RED_ARROW_X_MARKER_CFG = VisualizationMarkersCfg(
markers={
"arrow": sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/UIElements/arrow_x.usd",
scale=(1.0, 0.1, 0.1),
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(1.0, 0.0, 0.0)),
)
}
)
"""Configuration for the red arrow marker (along x-direction)."""
BLUE_ARROW_X_MARKER_CFG = VisualizationMarkersCfg(
markers={
"arrow": sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/UIElements/arrow_x.usd",
scale=(1.0, 0.1, 0.1),
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(0.0, 0.0, 1.0)),
)
}
)
"""Configuration for the blue arrow marker (along x-direction)."""
GREEN_ARROW_X_MARKER_CFG = VisualizationMarkersCfg(
markers={
"arrow": sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/UIElements/arrow_x.usd",
scale=(1.0, 0.1, 0.1),
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(0.0, 1.0, 0.0)),
)
}
)
"""Configuration for the green arrow marker (along x-direction)."""
##
# Goals.
##
CUBOID_MARKER_CFG = VisualizationMarkersCfg(
markers={
"cuboid": sim_utils.CuboidCfg(
size=(0.1, 0.1, 0.1),
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(1.0, 0.0, 0.0)),
),
}
)
"""Configuration for the cuboid marker."""
POSITION_GOAL_MARKER_CFG = VisualizationMarkersCfg(
markers={
"target_far": sim_utils.SphereCfg(
radius=0.01,
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(1.0, 0.0, 0.0)),
),
"target_near": sim_utils.SphereCfg(
radius=0.01,
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(0.0, 1.0, 0.0)),
),
"target_invisible": sim_utils.SphereCfg(
radius=0.01,
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(0.0, 0.0, 1.0)),
visible=False,
),
}
)
"""Configuration for the end-effector tracking marker."""
| 3,547 | Python | 27.384 | 87 | 0.610939 |
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/docs/CHANGELOG.rst | Changelog
---------
0.15.10 (2024-04-11)
~~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed sharing of the same memory address between returned tensors from observation terms
in the :class:`omni.isaac.orbit.managers.ObservationManager` class. Earlier, the returned
tensors could map to the same memory address, causing issues when the tensors were modified
during scaling, clipping or other operations.
0.15.9 (2024-04-04)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed assignment of individual termination terms inside the :class:`omni.isaac.orbit.managers.TerminationManager`
class. Earlier, the terms were being assigned their values through an OR operation which resulted in incorrect
values. This regression was introduced in version 0.15.1.
0.15.8 (2024-04-02)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added option to define ordering of points for the mesh-grid generation in the
:func:`omni.isaac.orbit.sensors.ray_caster.patterns.grid_pattern`. This parameter defaults to 'xy'
for backward compatibility.
0.15.7 (2024-03-28)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added option to return indices/data in the specified query keys order in
:class:`omni.isaac.orbit.managers.SceneEntityCfg` class, and the respective
:func:`omni.isaac.orbit.utils.string.resolve_matching_names_values` and
:func:`omni.isaac.orbit.utils.string.resolve_matching_names` functions.
0.15.6 (2024-03-28)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Extended the :class:`omni.isaac.orbit.app.AppLauncher` class to support the loading of experience files
from the command line. This allows users to load a specific experience file when running the application
(such as for multi-camera rendering or headless mode).
Changed
^^^^^^^
* Changed default loading of experience files in the :class:`omni.isaac.orbit.app.AppLauncher` class from the ones
provided by Isaac Sim to the ones provided in Orbit's ``source/apps`` directory.
0.15.5 (2024-03-23)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the env origins in :meth:`_compute_env_origins_grid` of :class:`omni.isaac.orbit.terrain.TerrainImporter`
to match that obtained from the Isaac Sim :class:`omni.isaac.cloner.GridCloner` class.
Added
^^^^^
* Added unit test to ensure consistency between environment origins generated by IsaacSim's Grid Cloner and those
produced by the TerrainImporter.
0.15.4 (2024-03-22)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the :class:`omni.isaac.orbit.envs.mdp.actions.NonHolonomicActionCfg` class to use
the correct variable when applying actions.
0.15.3 (2024-03-21)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added unit test to check that :class:`omni.isaac.orbit.scene.InteractiveScene` entity data is not shared between separate instances.
Fixed
^^^^^
* Moved class variables in :class:`omni.isaac.orbit.scene.InteractiveScene` to correctly be assigned as
instance variables.
* Removed custom ``__del__`` magic method from :class:`omni.isaac.orbit.scene.InteractiveScene`.
0.15.2 (2024-03-21)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Added resolving of relative paths for the main asset USD file when using the
:class:`omni.isaac.orbit.sim.converters.UrdfConverter` class. This is to ensure that the material paths are
resolved correctly when the main asset file is moved to a different location.
0.15.1 (2024-03-19)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the imitation learning workflow example script, updating Orbit and Robomimic API calls.
* Removed the resetting of :attr:`_term_dones` in the :meth:`omni.isaac.orbit.managers.TerminationManager.reset`.
Previously, the environment cleared out all the terms. However, it impaired reading the specific term's values externally.
0.15.0 (2024-03-17)
~~~~~~~~~~~~~~~~~~~
Deprecated
^^^^^^^^^^
* Renamed :class:`omni.isaac.orbit.managers.RandomizationManager` to :class:`omni.isaac.orbit.managers.EventManager`
class for clarification as the manager takes care of events such as reset in addition to pure randomizations.
* Renamed :class:`omni.isaac.orbit.managers.RandomizationTermCfg` to :class:`omni.isaac.orbit.managers.EventTermCfg`
for consistency with the class name change.
0.14.1 (2024-03-16)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added simulation schemas for joint drive and fixed tendons. These can be configured for assets imported
from file formats.
* Added logging of tendon properties to the articulation class (if they are present in the USD prim).
0.14.0 (2024-03-15)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the ordering of body names used in the :class:`omni.isaac.orbit.assets.Articulation` class. Earlier,
the body names were not following the same ordering as the bodies in the articulation. This led
to issues when using the body names to access data related to the links from the articulation view
(such as Jacobians, mass matrices, etc.).
Removed
^^^^^^^
* Removed the attribute :attr:`body_physx_view` from the :class:`omni.isaac.orbit.assets.RigidObject`
  and :class:`omni.isaac.orbit.assets.Articulation` classes. These caused confusion when used
  with the articulation view, since the body names did not follow the same ordering.
0.13.1 (2024-03-14)
~~~~~~~~~~~~~~~~~~~
Removed
^^^^^^^
* Removed the :mod:`omni.isaac.orbit.compat` module. This module was used to provide compatibility
with older versions of Isaac Sim. It is no longer needed since we have most of the functionality
absorbed into the main classes.
0.13.0 (2024-03-12)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added support for the following data types inside the :class:`omni.isaac.orbit.sensors.Camera` class:
  ``instance_segmentation_fast`` and ``instance_id_segmentation_fast``. These are GPU-supported annotations
  and are faster than the regular annotations.
Fixed
^^^^^
* Fixed handling of semantic filtering inside the :class:`omni.isaac.orbit.sensors.Camera` class. Earlier,
the annotator was given ``semanticTypes`` as an argument. However, with Isaac Sim 2023.1, the annotator
  does not accept this argument. Instead, the mapping needs to be set on the synthetic data interface directly.
* Fixed the return shape of colored images for segmentation data types inside the
  :class:`omni.isaac.orbit.sensors.Camera` class. Earlier, the images were always returned as ``int32``. Now,
  they are cast to a 4-channel ``uint8`` array before returning if colorization is enabled for the annotation type.
Removed
^^^^^^^
* Dropped support for ``instance_segmentation`` and ``instance_id_segmentation`` annotations in the
:class:`omni.isaac.orbit.sensors.Camera` class. Their "fast" counterparts should be used instead.
* Renamed the argument :attr:`omni.isaac.orbit.sensors.CameraCfg.semantic_types` to
:attr:`omni.isaac.orbit.sensors.CameraCfg.semantic_filter`. This is more aligned with Replicator's terminology
for semantic filter predicates.
* Replaced the argument :attr:`omni.isaac.orbit.sensors.CameraCfg.colorize` with separate colorized
arguments for each annotation type (:attr:`~omni.isaac.orbit.sensors.CameraCfg.colorize_instance_segmentation`,
:attr:`~omni.isaac.orbit.sensors.CameraCfg.colorize_instance_id_segmentation`, and
:attr:`~omni.isaac.orbit.sensors.CameraCfg.colorize_semantic_segmentation`).
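A hedged sketch of the resulting camera configuration (the prim path and semantic filter value are
illustrative):

.. code-block:: python

    import omni.isaac.orbit.sim as sim_utils
    from omni.isaac.orbit.sensors import CameraCfg

    camera_cfg = CameraCfg(
        prim_path="{ENV_REGEX_NS}/Camera",
        spawn=sim_utils.PinholeCameraCfg(),
        width=128,
        height=128,
        data_types=["rgb", "instance_segmentation_fast"],
        semantic_filter="class:cube",          # replaces the old ``semantic_types`` argument
        colorize_instance_segmentation=False,  # keep raw labels instead of ``uint8`` colors
    )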
0.12.4 (2024-03-11)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Adapted randomization terms to deal with ``slice`` for the body indices. Earlier, the terms were not
able to handle the slice object and were throwing an error.
* Added ``slice`` type-hinting to all body and joint related methods in the rigid body and articulation
classes. This is to make it clear that the methods can handle both list of indices and slices.
0.12.3 (2024-03-11)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Added signal handler to the :class:`omni.isaac.orbit.app.AppLauncher` class to catch the ``SIGINT`` signal
and close the application gracefully. This is to prevent the application from crashing when the user
presses ``Ctrl+C`` to close the application.
0.12.2 (2024-03-10)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added observation terms for states of a rigid object in world frame.
* Added randomization terms to set root state with randomized orientation and joint state within user-specified limits.
* Added reward term for penalizing specific termination terms.
Fixed
^^^^^
* Improved sampling of states inside randomization terms. Earlier, the code did multiple torch calls
for sampling different components of the vector. Now, it uses a single call to sample the entire vector.
0.12.1 (2024-03-09)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added an option to the last actions observation term to get a specific term by name from the action manager.
If None, the behavior remains the same as before (the entire action is returned).
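A minimal sketch of the new option, assuming the parameter is named ``action_name`` (the term name
is illustrative):

.. code-block:: python

    import omni.isaac.orbit.envs.mdp as mdp
    from omni.isaac.orbit.managers import ObservationTermCfg as ObsTerm

    # observe only the action produced by the "arm_action" term
    last_arm_action = ObsTerm(func=mdp.last_action, params={"action_name": "arm_action"})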
0.12.0 (2024-03-08)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added functionality to sample flat patches on a generated terrain. This can be configured using
:attr:`omni.isaac.orbit.terrains.SubTerrainBaseCfg.flat_patch_sampling` attribute.
* Added a randomization function for setting terrain-aware root state. Through this, an asset can be
  reset to a randomly sampled flat patch.
Fixed
^^^^^
* Separated normal and terrain-based position commands. The terrain-based commands rely on the
  terrain to sample flat patches for setting the target position.
* Fixed command resample termination function.
Changed
^^^^^^^
* Added the attribute :attr:`omni.isaac.orbit.envs.mdp.commands.UniformVelocityCommandCfg.heading_control_stiffness`
to control the stiffness of the heading control term in the velocity command term. Earlier, this was
hard-coded to 0.5 inside the term.
Removed
^^^^^^^
* Removed the function :meth:`sample_new_targets` in the terrain importer. Instead, the attribute
  :attr:`omni.isaac.orbit.terrains.TerrainImporter.flat_patches` should be used to sample new targets.
0.11.3 (2024-03-04)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Corrected the functions :func:`omni.isaac.orbit.utils.math.axis_angle_from_quat` and :func:`omni.isaac.orbit.utils.math.quat_error_magnitude`
  to accept tensors of shape ``(..., 4)`` instead of ``(N, 4)``. This aligns the implementation with the
  documentation and allows the functions to handle higher-dimensional inputs.
0.11.2 (2024-03-04)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added checks for default joint position and joint velocity in the articulation class. This is to prevent
  users from configuring values for these quantities that might be outside the valid range reported by the simulation.
0.11.1 (2024-02-29)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Replaced the default values for ``joint_ids`` and ``body_ids`` from ``None`` to ``slice(None)``
in the :class:`omni.isaac.orbit.managers.SceneEntityCfg`.
* Adapted rewards and observations terms so that the users can query a subset of joints and bodies.
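For example, a term can now be restricted to a subset of joints (the regex pattern is illustrative):

.. code-block:: python

    from omni.isaac.orbit.managers import SceneEntityCfg

    # only the knee joints are considered by the term; omitting ``joint_names``
    # keeps the default ``slice(None)``, i.e. all joints
    asset_cfg = SceneEntityCfg("robot", joint_names=[".*_knee"])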
0.11.0 (2024-02-27)
~~~~~~~~~~~~~~~~~~~
Removed
^^^^^^^
* Dropped support for Isaac Sim<=2022.2. As part of this, removed the components of :class:`omni.isaac.orbit.app.AppLauncher`
which handled ROS extension loading. We no longer need them in Isaac Sim>=2023.1 to control the load order to avoid crashes.
* Upgraded Dockerfile to use ISAACSIM_VERSION=2023.1.1 by default.
0.10.28 (2024-02-29)
~~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Implemented relative and moving average joint position action terms. These allow the user to specify
the target joint positions as relative to the current joint positions or as a moving average of the
joint positions over a window of time.
0.10.27 (2024-02-28)
~~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added UI feature to start and stop animation recording in the stage when running an environment.
To enable this feature, please pass the argument ``--disable_fabric`` to the environment script to allow
USD read/write operations. Be aware that this will slow down the simulation.
0.10.26 (2024-02-26)
~~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added a viewport camera controller class to the :class:`omni.isaac.orbit.envs.BaseEnv`. This is useful
for applications where the user wants to render the viewport from different perspectives even when the
simulation is running in headless mode.
0.10.25 (2024-02-26)
~~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Ensured that all path arguments in :mod:`omni.isaac.orbit.sim.utils` are cast to ``str``. Previously,
  path-like types were assumed to be strings without casting.
0.10.24 (2024-02-26)
~~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added tracking of contact time in the :class:`omni.isaac.orbit.sensors.ContactSensor` class. Previously,
only the air time was being tracked.
* Added contact force threshold, :attr:`omni.isaac.orbit.sensors.ContactSensorCfg.force_threshold`, to detect
  when the contact sensor is in contact. Previously, this was hard-coded to 1.0 in the sensor class.
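A hedged sketch of the sensor configuration (the ``track_air_time`` flag and the prim path are
assumptions based on the sensor's existing interface):

.. code-block:: python

    from omni.isaac.orbit.sensors import ContactSensorCfg

    contact_cfg = ContactSensorCfg(
        prim_path="{ENV_REGEX_NS}/Robot/.*_FOOT",
        track_air_time=True,   # also enables the new contact-time tracking
        force_threshold=1.0,   # net force [N] above which contact is registered
    )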
0.10.23 (2024-02-21)
~~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the order of size arguments in :meth:`omni.isaac.orbit.terrains.height_field.random_uniform_terrain`. Previously, the function crashed when the sizes along x and y differed.
0.10.22 (2024-02-14)
~~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed "divide by zero" bug in :class:`~omni.isaac.orbit.sim.SimulationContext` when setting gravity vector.
Now, it is correctly disabled when the gravity vector is set to zero.
0.10.21 (2024-02-12)
~~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the printing of articulation joint information when the articulation has only one joint.
Earlier, the function was performing a squeeze operation on the tensor, which caused an error when
trying to index the tensor of shape (1,).
0.10.20 (2024-02-12)
~~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added :attr:`omni.isaac.orbit.sim.PhysxCfg.enable_enhanced_determinism` to enable improved
  determinism from PhysX. Please note this comes at the expense of performance.
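For instance, the flag can be set through the simulation configuration:

.. code-block:: python

    from omni.isaac.orbit.sim import PhysxCfg, SimulationCfg

    # trade some performance for improved run-to-run determinism
    sim_cfg = SimulationCfg(physx=PhysxCfg(enable_enhanced_determinism=True))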
0.10.19 (2024-02-08)
~~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed environment closing so that articulations, objects, and sensors are cleared properly.
0.10.18 (2024-02-05)
~~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Pinned the :mod:`torch` version to 2.0.1 in ``setup.py`` to keep parity with the version supplied by
  Isaac Sim 2023.1.1, and to prevent incompatibility between :mod:`torch` 2.2 and
  :mod:`typing-extensions` 3.7.4.3.
0.10.17 (2024-02-02)
~~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed carb setting ``/app/livestream/enabled`` to be set as False unless live-streaming is specified
by :class:`omni.isaac.orbit.app.AppLauncher` settings. This fixes the logic of :meth:`SimulationContext.render`,
which depended on the config in previous versions of Isaac defaulting to false for this setting.
0.10.16 (2024-01-29)
~~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added an offset parameter to the height scan observation term. This allows the user to specify the
  height offset of the scan from the tracked body. Previously, it was hard-coded to 0.5.
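A minimal sketch of the observation term, assuming the parameter is named ``offset`` and using an
illustrative sensor name:

.. code-block:: python

    import omni.isaac.orbit.envs.mdp as mdp
    from omni.isaac.orbit.managers import ObservationTermCfg as ObsTerm, SceneEntityCfg

    height_scan = ObsTerm(
        func=mdp.height_scan,
        params={"sensor_cfg": SceneEntityCfg("height_scanner"), "offset": 0.5},
    )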
0.10.15 (2024-01-29)
~~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed joint torque computation for implicit actuators. Earlier, the torque was always zero for implicit
actuators. Now, it is computed approximately by applying the PD law.
0.10.14 (2024-01-22)
~~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the tensor shape of :attr:`omni.isaac.orbit.sensors.ContactSensorData.force_matrix_w`. Earlier, the reshaping
led to a mismatch with the data obtained from PhysX.
0.10.13 (2024-01-15)
~~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed running of environments with a single instance even if the :attr:`replicate_physics` flag is set to True.
0.10.12 (2024-01-10)
~~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed indexing of source and target frames in the :class:`omni.isaac.orbit.sensors.FrameTransformer` class.
Earlier, it always assumed that the source frame body is at index 0. Now, it uses the body index of the
source frame to compute the transformation.
Deprecated
^^^^^^^^^^
* Renamed quantities in the :class:`omni.isaac.orbit.sensors.FrameTransformerData` class to be more
consistent with the terminology used in the asset classes. The following quantities are deprecated:
* ``target_rot_w`` -> ``target_quat_w``
* ``source_rot_w`` -> ``source_quat_w``
* ``target_rot_source`` -> ``target_quat_source``
0.10.11 (2024-01-08)
~~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed attribute error raised when calling the :class:`omni.isaac.orbit.envs.mdp.TerrainBasedPositionCommand`
command term.
* Added a dummy function in :class:`omni.isaac.orbit.terrain.TerrainImporter` that returns environment
origins as terrain-aware sampled targets. This function should be implemented by child classes based on
the terrain type.
0.10.10 (2023-12-21)
~~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed reliance on the non-existent ``Viewport`` in :class:`omni.isaac.orbit.sim.SimulationContext` during livestreaming
  by ensuring that the extension ``omni.kit.viewport.window`` is enabled in :class:`omni.isaac.orbit.app.AppLauncher` when
  livestreaming is enabled.
0.10.9 (2023-12-21)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed invalidation of physics views inside the asset and sensor classes. Earlier, they were left initialized
even when the simulation was stopped. This caused issues when closing the application.
0.10.8 (2023-12-20)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the :class:`omni.isaac.orbit.envs.mdp.actions.DifferentialInverseKinematicsAction` class
to account for the offset pose of the end-effector.
0.10.7 (2023-12-19)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Added a check to ray-cast and camera sensor classes to ensure that the sensor prim path does not
have a regex expression at its leaf. For instance, ``/World/Robot/camera_.*`` is not supported
for these sensor types. This behavior needs to be fixed in the future.
0.10.6 (2023-12-19)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added support for using articulations as visualization markers. This disables all physics APIs from
the articulation and allows the user to use it as a visualization marker. It is useful for creating
visualization markers for the end-effectors or base of the robot.
Fixed
^^^^^
* Fixed hiding of debug markers from secondary images when using the
:class:`omni.isaac.orbit.markers.VisualizationMarkers` class. Earlier, the properties were applied on
the XForm prim instead of the Mesh prim.
0.10.5 (2023-12-18)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed test ``check_base_env_anymal_locomotion.py``, which
previously called :func:`torch.jit.load` with the path to a policy (which would work
for a local file), rather than calling
:func:`omni.isaac.orbit.utils.assets.read_file` on the path to get the file itself.
0.10.4 (2023-12-14)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed potentially breaking import of omni.kit.widget.toolbar by ensuring that
if live-stream is enabled, then the :mod:`omni.kit.widget.toolbar`
extension is loaded.
0.10.3 (2023-12-12)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added the attribute :attr:`omni.isaac.orbit.actuators.ActuatorNetMLPCfg.input_order`
to specify the order of the input tensors to the MLP network.
Fixed
^^^^^
* Fixed computation of metrics for the velocity command term. Earlier, the norm was being computed
over the entire batch instead of the last dimension.
* Fixed the clipping inside the :class:`omni.isaac.orbit.actuators.DCMotor` class. Earlier, it was
  not able to handle the case when the configured saturation limit was set to None.
0.10.2 (2023-12-12)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Added a check in the simulation stop callback in the :class:`omni.isaac.orbit.sim.SimulationContext` class
to not render when an exception is raised. The while loop in the callback was preventing the application
from closing when an exception was raised.
0.10.1 (2023-12-06)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added command manager class with terms defined by :class:`omni.isaac.orbit.managers.CommandTerm`. This
  allows multiple types of command generators to be used in the same environment.
0.10.0 (2023-12-04)
~~~~~~~~~~~~~~~~~~~
Changed
^^^^^^^
* Modified the sensor and asset base classes to use the underlying PhysX views instead of Isaac Sim views.
Using Isaac Sim classes led to a very high load time (of the order of minutes) when using a scene with
many assets. This is because Isaac Sim supports USD paths which are slow and not required.
Added
^^^^^
* Added faster implementation of USD stage traversal methods inside the :class:`omni.isaac.orbit.sim.utils` module.
* Added properties :attr:`omni.isaac.orbit.assets.AssetBase.num_instances` and
:attr:`omni.isaac.orbit.sensor.SensorBase.num_instances` to obtain the number of instances of the asset
or sensor in the simulation respectively.
Removed
^^^^^^^
* Removed dependencies on Isaac Sim view classes. It is no longer possible to use :attr:`root_view` and
:attr:`body_view`. Instead use :attr:`root_physx_view` and :attr:`body_physx_view` to access the underlying
PhysX views.
0.9.55 (2023-12-03)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the Nucleus directory path in the :attr:`omni.isaac.orbit.utils.assets.NVIDIA_NUCLEUS_DIR`.
Earlier, it was referring to the ``NVIDIA/Assets`` directory instead of ``NVIDIA``.
0.9.54 (2023-11-29)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed pose computation in the :class:`omni.isaac.orbit.sensors.Camera` class to obtain them from XFormPrimView
instead of using ``UsdGeomCamera.ComputeLocalToWorldTransform`` method. The latter is not updated correctly
during GPU simulation.
* Fixed initialization of the annotator info in the class :class:`omni.isaac.orbit.sensors.Camera`. Previously,
  all dicts shared the same memory address, which caused all annotators to have the same info.
* Fixed the conversion of ``uint32`` warp arrays inside the :meth:`omni.isaac.orbit.utils.array.convert_to_torch`
method. PyTorch does not support this type, so it is converted to ``int32`` before converting to PyTorch tensor.
* Added render call inside :meth:`omni.isaac.orbit.sim.SimulationContext.reset` to initialize Replicator
buffers when the simulation is reset.
0.9.53 (2023-11-29)
~~~~~~~~~~~~~~~~~~~
Changed
^^^^^^^
* Changed the behavior of passing :obj:`None` to the :class:`omni.isaac.orbit.actuators.ActuatorBaseCfg`
class. Earlier, they were resolved to fixed default values. Now, they imply that the values are loaded
from the USD joint drive configuration.
Added
^^^^^
* Added setting of joint armature and friction quantities to the articulation class.
0.9.52 (2023-11-29)
~~~~~~~~~~~~~~~~~~~
Changed
^^^^^^^
* Changed the warning print in :meth:`omni.isaac.orbit.sim.utils.apply_nested` method
to be more descriptive. Earlier, it was printing a warning for every instanced prim.
Now, it only prints a warning if it could not apply the attribute to any of the prims.
Added
^^^^^
* Added the method :meth:`omni.isaac.orbit.utils.assets.retrieve_file_path` to
obtain the absolute path of a file on the Nucleus server or locally.
Fixed
^^^^^
* Fixed hiding of STOP button in the :class:`AppLauncher` class when running the
simulation in headless mode.
* Fixed a bug with :meth:`omni.isaac.orbit.sim.utils.clone` failing when the input prim path
had no parent (example: "/Table").
0.9.51 (2023-11-29)
~~~~~~~~~~~~~~~~~~~
Changed
^^^^^^^
* Changed the :meth:`omni.isaac.orbit.sensor.SensorBase.update` method to always recompute the buffers if
the sensor is in visualization mode.
Added
^^^^^
* Added available entities to the error message when accessing a non-existent entity in the
:class:`InteractiveScene` class.
* Added a warning message when the user tries to reference an invalid prim in the :class:`FrameTransformer` sensor.
0.9.50 (2023-11-28)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Hid the ``STOP`` button in the UI when running standalone Python scripts. This is to prevent
users from accidentally clicking the button and stopping the simulation. They should only be able to
play and pause the simulation from the UI.
Removed
^^^^^^^
* Removed :attr:`omni.isaac.orbit.sim.SimulationCfg.shutdown_app_on_stop`. The simulation is always rendering
if it is stopped from the UI. The user needs to close the window or press ``Ctrl+C`` to close the simulation.
0.9.49 (2023-11-27)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added an interface class, :class:`omni.isaac.orbit.managers.ManagerTermBase`, to serve as the parent class
for term implementations that are functional classes.
* Adapted all managers to support terms that are classes and not just functions. This allows the user to
  create more complex terms that require additional state information.
0.9.48 (2023-11-24)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed initialization of drift in the :class:`omni.isaac.orbit.sensors.RayCasterCamera` class.
0.9.47 (2023-11-24)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Automated identification of the root prim in the :class:`omni.isaac.orbit.assets.RigidObject` and
:class:`omni.isaac.orbit.assets.Articulation` classes. Earlier, the root prim was hard-coded to
the spawn prim path. Now, the class searches for the root prim under the spawn prim path.
0.9.46 (2023-11-24)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed a critical issue in the asset classes with writing states into physics handles.
Earlier, the states were written over all the indices instead of the indices of the
asset that were being updated. This caused the physics handles to refresh the states
of all the assets in the scene, which is not desirable.
0.9.45 (2023-11-24)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added :class:`omni.isaac.orbit.command_generators.UniformPoseCommandGenerator` to generate
poses in the asset's root frame by uniformly sampling from a given range.
0.9.44 (2023-11-16)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added methods :meth:`reset` and :meth:`step` to the :class:`omni.isaac.orbit.envs.BaseEnv`. This unifies
the environment interface for simple standalone applications with the class.
0.9.43 (2023-11-16)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Replaced subscription of physics play and stop events in the :class:`omni.isaac.orbit.assets.AssetBase` and
:class:`omni.isaac.orbit.sensors.SensorBase` classes with subscription to time-line play and stop events.
This is to prevent issues in cases where physics first needs to perform mesh cooking and handles are not
available immediately. For instance, with deformable meshes.
0.9.42 (2023-11-16)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed setting of damping values from the configuration for :class:`ActuatorBase` class. Earlier,
the stiffness values were being set into damping when a dictionary configuration was passed to the
actuator model.
* Added dealing with :class:`int` and :class:`float` values in the configurations of :class:`ActuatorBase`.
Earlier, a type-error was thrown when integer values were passed to the actuator model.
0.9.41 (2023-11-16)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the naming and shaping issues in the binary joint action term.
0.9.40 (2023-11-09)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Simplified the manual initialization of Isaac Sim :class:`ArticulationView` class. Earlier, we basically
copied the code from the Isaac Sim source code. Now, we just call their initialize method.
Changed
^^^^^^^
* Changed the name of attribute :attr:`default_root_state_w` to :attr:`default_root_state`. The latter is
more correct since the data is actually in the local environment frame and not the simulation world frame.
0.9.39 (2023-11-08)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Changed the reference of private ``_body_view`` variable inside the :class:`RigidObject` class
to the public ``body_view`` property. For a rigid object, the private variable is not defined.
0.9.38 (2023-11-07)
~~~~~~~~~~~~~~~~~~~
Changed
^^^^^^^
* Upgraded the :class:`omni.isaac.orbit.envs.RLTaskEnv` class to support Gym 0.29.0 environment definition.
Added
^^^^^
* Added computation of ``time_outs`` and ``terminated`` signals inside the termination manager. These follow the
definition mentioned in `Gym 0.29.0 <https://gymnasium.farama.org/tutorials/gymnasium_basics/handling_time_limits/>`_.
* Added proper handling of observation and action spaces in the :class:`omni.isaac.orbit.envs.RLTaskEnv` class.
These now follow closely to how Gym VecEnv handles the spaces.
0.9.37 (2023-11-06)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed broken visualization in the :class:`omni.isaac.orbit.sensors.FrameTransformer` class by overwriting the
  correct ``_debug_vis_callback`` function.
* Moved the visualization marker configurations of sensors to their respective sensor configuration classes.
This allows users to set these configurations from the configuration object itself.
0.9.36 (2023-11-03)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Added explicit deleting of different managers in the :class:`omni.isaac.orbit.envs.BaseEnv` and
:class:`omni.isaac.orbit.envs.RLTaskEnv` classes. This is required since deleting the managers
is order-sensitive (many managers need to be deleted before the scene is deleted).
0.9.35 (2023-11-02)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the error: ``'str' object has no attribute '__module__'`` introduced by adding the future import inside the
:mod:`omni.isaac.orbit.utils.warp.kernels` module. Warp language does not support the ``__future__`` imports.
0.9.34 (2023-11-02)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Added missing import of ``from __future__ import annotations`` in the :mod:`omni.isaac.orbit.utils.warp`
module. This is needed to have a consistent behavior across Python versions.
0.9.33 (2023-11-02)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the :class:`omni.isaac.orbit.command_generators.NullCommandGenerator` class. Earlier,
  it raised a runtime error due to infinity in the resampling time range. Now, the class just
  overrides the parent methods to perform no operations.
0.9.32 (2023-11-02)
~~~~~~~~~~~~~~~~~~~
Changed
^^^^^^^
* Renamed the :class:`omni.isaac.orbit.envs.RLEnv` class to :class:`omni.isaac.orbit.envs.RLTaskEnv` to
  avoid confusion between the terminology for environments and tasks.
0.9.31 (2023-11-02)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added the :class:`omni.isaac.orbit.sensors.RayCasterCamera` class, as a ray-casting based camera for
"distance_to_camera", "distance_to_image_plane" and "normals" annotations. It has the same interface and
functionalities as the USD Camera while it is on average 30% faster.
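A hedged configuration sketch (the mesh and prim paths are illustrative; the config class name is
assumed to mirror the sensor name):

.. code-block:: python

    from omni.isaac.orbit.sensors import RayCasterCameraCfg
    from omni.isaac.orbit.sensors.ray_caster import patterns

    camera_cfg = RayCasterCameraCfg(
        prim_path="{ENV_REGEX_NS}/Robot/base",
        mesh_prim_paths=["/World/ground"],  # meshes to ray-cast against
        pattern_cfg=patterns.PinholeCameraPatternCfg(width=128, height=128),
        data_types=["distance_to_image_plane", "normals"],
    )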
0.9.30 (2023-11-01)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Added skipping of None values in the :class:`InteractiveScene` class when creating the scene from configuration
objects. Earlier, it was throwing an error when the user passed a None value for a scene element.
* Added ``kwargs`` to the :class:`RLEnv` class to allow passing additional arguments from gym registry function.
This is now needed since the registry function passes args beyond the ones specified in the constructor.
0.9.29 (2023-11-01)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the material path resolution inside the :class:`omni.isaac.orbit.sim.converters.UrdfConverter` class.
With Isaac Sim 2023.1, the material paths from the importer are always saved as absolute paths. This caused
issues when the generated USD file was moved to a different location. The fix now resolves the material paths
relative to the USD file location.
0.9.28 (2023-11-01)
~~~~~~~~~~~~~~~~~~~
Changed
^^^^^^^
* Changed the way the :func:`omni.isaac.orbit.sim.spawners.from_files.spawn_ground_plane` function sets the
height of the ground. Earlier, it was reading the height from the configuration object. Now, it expects the
desired transformation as inputs to the function. This makes it consistent with the other spawner functions.
0.9.27 (2023-10-31)
~~~~~~~~~~~~~~~~~~~
Changed
^^^^^^^
* Removed the default value of the argument ``camel_case`` in setters of USD attributes. This is to avoid
confusion with the naming of the attributes in the USD file.
Fixed
^^^^^
* Fixed the selection of material prim in the :class:`omni.isaac.orbit.sim.spawners.materials.spawn_preview_surface`
method. Earlier, the created prim was being selected in the viewport which interfered with the selection of
prims by the user.
* Updated :class:`omni.isaac.orbit.sim.converters.MeshConverter` to use a different stage than the default stage
for the conversion. This is to avoid the issue of the stage being closed when the conversion is done.
0.9.26 (2023-10-31)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added the sensor implementation for :class:`omni.isaac.orbit.sensors.FrameTransformer` class. Currently,
it handles obtaining the transformation between two frames in the same articulation.
0.9.25 (2023-10-27)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added the :mod:`omni.isaac.orbit.envs.ui` module to put all the UI-related classes in one place. This currently
implements the :class:`omni.isaac.orbit.envs.ui.BaseEnvWindow` and :class:`omni.isaac.orbit.envs.ui.RLEnvWindow`
classes. Users can inherit from these classes to create their own UI windows.
* Added the attribute :attr:`omni.isaac.orbit.envs.BaseEnvCfg.ui_window_class_type` to specify the UI window class
  used for the environment. This allows users to provide their own UI window implementation.
0.9.24 (2023-10-27)
~~~~~~~~~~~~~~~~~~~
Changed
^^^^^^^
* Changed the behavior of setting up debug visualization for assets, sensors and command generators.
Earlier it was raising an error if debug visualization was not enabled in the configuration object.
Now it checks whether debug visualization is implemented and only sets up the callback if it is
implemented.
0.9.23 (2023-10-27)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed a typo in the :class:`AssetBase` and :class:`SensorBase` that affected the class destructor.
  Earlier, a tuple was being created in the constructor instead of the actual object.
0.9.22 (2023-10-26)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added a :class:`omni.isaac.orbit.command_generators.NullCommandGenerator` class for no command environments.
This is easier to work with than having checks for :obj:`None` in the command generator.
Fixed
^^^^^
* Moved the randomization manager to the :class:`omni.isaac.orbit.envs.BaseEnv` class with the default
settings to reset the scene to the defaults specified in the configurations of assets.
* Moved command generator to the :class:`omni.isaac.orbit.envs.RLEnv` class to have all task-specification
  related classes in the same place.
0.9.21 (2023-10-26)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Decreased the priority of callbacks in asset and sensor base classes. This may help in preventing
crashes when warm starting the simulation.
* Fixed no rendering mode when running the environment from the GUI. Earlier the function
:meth:`SimulationContext.set_render_mode` was erroring out.
0.9.20 (2023-10-25)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Changed naming in :class:`omni.isaac.orbit.sim.SimulationContext.RenderMode` to use ``NO_GUI_OR_RENDERING``
and ``NO_RENDERING`` instead of ``HEADLESS`` for clarity.
* Changed :class:`omni.isaac.orbit.sim.SimulationContext` to be capable of handling livestreaming and
offscreen rendering.
* Changed :class:`omni.isaac.orbit.app.AppLauncher` envvar ``VIEWPORT_RECORD`` to the more descriptive
``OFFSCREEN_RENDER``.
0.9.19 (2023-10-25)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added Gym observation and action spaces for the :class:`omni.isaac.orbit.envs.RLEnv` class.
0.9.18 (2023-10-23)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Created :class:`omni.isaac.orbit.sim.converters.asset_converter.AssetConverter` to serve as a base
  class for all asset converters.
* Added :class:`omni.isaac.orbit.sim.converters.mesh_converter.MeshConverter` to handle loading and conversion
  of mesh files (OBJ, STL and FBX) into USD format.
* Added script ``convert_mesh.py`` to ``source/tools`` to allow users to convert a mesh to USD via command line arguments.
Changed
^^^^^^^
* Renamed the submodule :mod:`omni.isaac.orbit.sim.loaders` to :mod:`omni.isaac.orbit.sim.converters` to be more
general with the functionality of the module.
* Updated ``check_instanceable.py`` script to convert relative paths to absolute paths.
0.9.17 (2023-10-22)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added setters and getters for term configurations in the :class:`RandomizationManager`, :class:`RewardManager`
and :class:`TerminationManager` classes. This allows the user to modify the term configurations after the
manager has been created.
* Added the method :meth:`compute_group` to the :class:`omni.isaac.orbit.managers.ObservationManager` class to
compute the observations for only a given group.
* Added the curriculum term for modifying reward weights after certain environment steps.
0.9.16 (2023-10-22)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added support for keyword arguments for terms in the :class:`omni.isaac.orbit.managers.ManagerBase`.
Fixed
^^^^^
* Fixed resetting of buffers in the :class:`TerminationManager` class. Earlier, the values were being set
to ``0.0`` instead of ``False``.
0.9.15 (2023-10-22)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added base yaw heading and body acceleration into :class:`omni.isaac.orbit.assets.RigidObjectData` class.
These quantities are computed inside the :class:`RigidObject` class.
Fixed
^^^^^
* Fixed the :meth:`omni.isaac.orbit.assets.RigidObject.set_external_force_and_torque` method to correctly
deal with the body indices.
* Fixed a bug in the :meth:`omni.isaac.orbit.utils.math.wrap_to_pi` method to prevent self-assignment of
the input tensor.
0.9.14 (2023-10-21)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added 2-D drift (i.e. along x and y) to the :class:`omni.isaac.orbit.sensors.RayCaster` class.
* Added flags to the :class:`omni.isaac.orbit.sensors.ContactSensorCfg` to optionally obtain the
sensor origin and air time information. Since these are not required by default, they are
disabled by default.
Fixed
^^^^^
* Fixed the handling of contact sensor history buffer in the :class:`omni.isaac.orbit.sensors.ContactSensor` class.
Earlier, the buffer was not being updated correctly.
0.9.13 (2023-10-20)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the issue with double :obj:`Ellipsis` when indexing tensors with multiple dimensions.
The fix now uses :obj:`slice(None)` instead of :obj:`Ellipsis` to index the tensors.
0.9.12 (2023-10-18)
~~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed bugs in actuator model implementation for actuator nets. Earlier the DC motor clipping was not working.
* Fixed bug in applying actuator model in the :class:`omni.isaac.orbit.asset.Articulation` class. The new
implementation caches the outputs from explicit actuator model into the ``joint_pos_*_sim`` buffer to
avoid feedback loops in the tensor operation.
0.9.11 (2023-10-17)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added the support for semantic tags into the :class:`omni.isaac.orbit.sim.spawner.SpawnerCfg` class. This allows
the user to specify the semantic tags for a prim when spawning it into the scene. It follows the same format as
Omniverse Replicator.
0.9.10 (2023-10-16)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added ``--livestream`` and ``--ros`` CLI args to :class:`omni.isaac.orbit.app.AppLauncher` class.
* Added a static function :meth:`omni.isaac.orbit.app.AppLauncher.add_app_launcher_args`, which
appends the arguments needed for :class:`omni.isaac.orbit.app.AppLauncher` to the argument parser.
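A minimal sketch of the resulting launcher boilerplate:

.. code-block:: python

    import argparse

    from omni.isaac.orbit.app import AppLauncher

    parser = argparse.ArgumentParser(description="Example standalone script.")
    # append AppLauncher's CLI args (e.g. --livestream) to the user's parser
    AppLauncher.add_app_launcher_args(parser)
    args_cli = parser.parse_args()

    # launch the simulation app with the parsed settings
    app_launcher = AppLauncher(args_cli)
    simulation_app = app_launcher.app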
Changed
^^^^^^^
* Within :class:`omni.isaac.orbit.app.AppLauncher`, removed ``REMOTE_DEPLOYMENT`` env-var processing
  in favor of ``HEADLESS`` and ``LIVESTREAM`` env-vars. These have clearer uses and better parity
  with the CLI args.
0.9.9 (2023-10-12)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added the property :attr:`omni.isaac.orbit.assets.Articulation.is_fixed_base` to the articulation class to
check if the base of the articulation is fixed or floating.
* Added the task-space action term corresponding to the differential inverse-kinematics controller.
Fixed
^^^^^
* Simplified the :class:`omni.isaac.orbit.controllers.DifferentialIKController` to assume that user provides the
correct end-effector poses and Jacobians. Earlier it was doing internal frame transformations which made the
code more complicated and error-prone.
0.9.8 (2023-09-30)
~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the boundedness of class objects that register callbacks into the simulator.
  These include devices, :class:`AssetBase`, :class:`SensorBase` and :class:`CommandGenerator`.
  The fix ensures that the object gets deleted when the user deletes it.
0.9.7 (2023-09-26)
~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Modified the :class:`omni.isaac.orbit.markers.VisualizationMarkers` to use the
:class:`omni.isaac.orbit.sim.spawner.SpawnerCfg` class instead of their
own configuration objects. This makes it consistent with the other ways to spawn assets in the scene.
Added
^^^^^
* Added the method :meth:`copy` to configclass to allow copying of configuration objects.
0.9.6 (2023-09-26)
~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Changed class-level configuration classes to refer to class types using ``class_type`` attribute instead
of ``cls`` or ``cls_name``.
0.9.5 (2023-09-25)
~~~~~~~~~~~~~~~~~~
Changed
^^^^^^^
* Added future import of ``annotations`` to have a consistent behavior across Python versions.
* Removed the type-hinting from docstrings to simplify maintenance of the documentation. All type-hints are
now in the code itself.
0.9.4 (2023-08-29)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added :class:`omni.isaac.orbit.scene.InteractiveScene`, as the central scene unit that contains all entities
that are part of the simulation. These include the terrain, sensors, articulations, rigid objects etc.
The scene groups the common operations of these entities and allows to access them via their unique names.
* Added :mod:`omni.isaac.orbit.envs` module that contains environment definitions that encapsulate the different
general (scene, action manager, observation manager) and RL-specific (reward and termination manager) managers.
* Added :class:`omni.isaac.orbit.managers.SceneEntityCfg` to handle which scene elements are required by the
manager's terms. This allows the manager to parse useful information from the scene elements, such as the
joint and body indices, and pass them to the term.
* Added :class:`omni.isaac.orbit.sim.SimulationContext.RenderMode` to handle different rendering modes based on
what the user wants to update (viewport, cameras, or UI elements).
Fixed
^^^^^
* Fixed the :class:`omni.isaac.orbit.command_generators.CommandGeneratorBase` to register a debug visualization
callback similar to how sensors and robots handle visualization.
0.9.3 (2023-08-23)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Enabled the `faulthandler <https://docs.python.org/3/library/faulthandler.html>`_ to catch segfaults and print
  the stack trace. This is enabled by default in the :class:`omni.isaac.orbit.app.AppLauncher` class.
Fixed
^^^^^
* Re-added the :mod:`omni.isaac.orbit.utils.kit` to the ``compat`` directory and fixed all the references to it.
* Fixed the deletion of Replicator nodes for the :class:`omni.isaac.orbit.sensors.Camera` class. Earlier, the
Replicator nodes were not being deleted when the camera was deleted. However, this does not prevent the random
crashes that happen when the camera is deleted.
* Fixed the :meth:`omni.isaac.orbit.utils.math.convert_quat` to support both numpy and torch tensors.
Changed
^^^^^^^
* Renamed all the scripts inside the ``test`` directory to follow the convention:
* ``test_<module_name>.py``: Tests for the module ``<module_name>`` using unittest.
* ``check_<module_name>``: Check for the module ``<module_name>`` using python main function.
0.9.2 (2023-08-22)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added the ability to color meshes in the :class:`omni.isaac.orbit.terrain.TerrainGenerator` class. Currently,
it only supports coloring the mesh randomly (``"random"``), based on the terrain height (``"height"``), and
no coloring (``"none"``).
Fixed
^^^^^
* Modified the :class:`omni.isaac.orbit.terrain.TerrainImporter` class to configure visual and physics materials
based on the configuration object.
0.9.1 (2023-08-18)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Introduced three different rotation conventions in the :class:`omni.isaac.orbit.sensors.Camera` class. These
conventions are:
* ``opengl``: the camera is looking down the -Z axis with the +Y axis pointing up
* ``ros``: the camera is looking down the +Z axis with the +Y axis pointing down
* ``world``: the camera is looking along the +X axis with the -Z axis pointing down
These can be used to declare the camera offset in :class:`omni.isaac.orbit.sensors.CameraCfg.OffsetCfg` class
and in :meth:`omni.isaac.orbit.sensors.Camera.set_world_pose` method. Additionally, all conventions are
saved to :class:`omni.isaac.orbit.sensors.CameraData` class for easy access.
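For instance, a camera offset can be declared in the ROS convention (the pose values are illustrative):

.. code-block:: python

    from omni.isaac.orbit.sensors import CameraCfg

    offset = CameraCfg.OffsetCfg(
        pos=(0.5, 0.0, 0.2),
        rot=(1.0, 0.0, 0.0, 0.0),  # quaternion (w, x, y, z)
        convention="ros",          # looking down +Z, with +Y pointing down
    )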
Changed
^^^^^^^
* Adapted all the sensor classes to follow a structure similar to the :class:`omni.isaac.orbit.assets.AssetBase`.
  Hence, the spawning and initialization of sensors manually by the users is avoided.
* Removed the :meth:`debug_vis` function since this functionality is handled automatically by a render callback
  (based on the passed configuration for the :class:`omni.isaac.orbit.sensors.SensorBaseCfg.debug_vis` flag).
0.9.0 (2023-08-18)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Introduced a new set of asset interfaces. These interfaces simplify the spawning of assets into the scene
and initializing the physics handle by putting that inside post-startup physics callbacks. With this, users
no longer need to worry about the :meth:`spawn` and :meth:`initialize` calls.
* Added utility methods to :mod:`omni.isaac.orbit.utils.string` module that resolve regex expressions based
on passed list of target keys.
Changed
^^^^^^^
* Renamed all references of joints in an articulation from "dof" to "joint". This makes it consistent with the
terminology used in robotics.
Deprecated
^^^^^^^^^^
* Removed the previous modules for objects and robots. Instead the :class:`Articulation` and :class:`RigidObject`
should be used.
0.8.12 (2023-08-18)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added other properties provided by ``PhysicsScene`` to the :class:`omni.isaac.orbit.sim.SimulationContext`
class to allow setting CCD, solver iterations, etc.
* Added commonly used functions to the :class:`SimulationContext` class itself to avoid having additional
imports from Isaac Sim when doing simple tasks such as setting camera view or retrieving the simulation settings.
Fixed
^^^^^
* Switched the notations of default buffer values in :class:`omni.isaac.orbit.sim.PhysxCfg` from multiplication
to scientific notation to avoid confusion with the values.
0.8.11 (2023-08-18)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added utility functions and configuration objects in the :mod:`omni.isaac.orbit.sim.spawners`
to create the following prims in the scene:
* :mod:`omni.isaac.orbit.sim.spawners.from_file`: Create a prim from a USD/URDF file.
* :mod:`omni.isaac.orbit.sim.spawners.shapes`: Create USDGeom prims for shapes (box, sphere, cylinder, capsule, etc.).
* :mod:`omni.isaac.orbit.sim.spawners.materials`: Create a visual or physics material prim.
* :mod:`omni.isaac.orbit.sim.spawners.lights`: Create a USDLux prim for different types of lights.
* :mod:`omni.isaac.orbit.sim.spawners.sensors`: Create a USD prim for supported sensors.
Changed
^^^^^^^
* Modified the :class:`SimulationContext` class to take the default physics material using the material spawn
configuration object.
0.8.10 (2023-08-17)
~~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added methods for defining different physics-based schemas in the :mod:`omni.isaac.orbit.sim.schemas` module.
These methods allow creating the schema if it doesn't exist at the specified prim path and modify
its properties based on the configuration object.
0.8.9 (2023-08-09)
~~~~~~~~~~~~~~~~~~
Changed
^^^^^^^
* Moved the :class:`omni.isaac.orbit.asset_loader.UrdfLoader` class to the :mod:`omni.isaac.orbit.sim.loaders`
module to make it more accessible to the user.
0.8.8 (2023-08-09)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added configuration classes and functions for setting different physics-based schemas in the
:mod:`omni.isaac.orbit.sim.schemas` module. These allow modifying properties of the physics solver
on the asset using configuration objects.
0.8.7 (2023-08-03)
~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Added support for `__post_init__ <https://docs.python.org/3/library/dataclasses.html#post-init-processing>`_ in
the :class:`omni.isaac.orbit.utils.configclass` decorator.
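A minimal sketch of the now-supported pattern:

.. code-block:: python

    from omni.isaac.orbit.utils import configclass

    @configclass
    class CuboidCfg:
        size: float = 1.0
        volume: float = 0.0

        def __post_init__(self):
            # derive dependent values after the dataclass fields are set
            self.volume = self.size**3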
0.8.6 (2023-08-03)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added support for callable classes in the :class:`omni.isaac.orbit.managers.ManagerBase`.
0.8.5 (2023-08-03)
~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the :class:`omni.isaac.orbit.markers.VisualizationMarkers` class so that the markers are not visible in camera rendering mode.
Changed
^^^^^^^
* Simplified the creation of the point instancer in the :class:`omni.isaac.orbit.markers.VisualizationMarkers` class. It now creates a new
prim at the next available prim path if a prim already exists at the given path.
0.8.4 (2023-08-02)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added the :class:`omni.isaac.orbit.sim.SimulationContext` class to the :mod:`omni.isaac.orbit.sim` module.
This class inherits from the :class:`omni.isaac.core.simulation_context.SimulationContext` class and adds
the ability to create a simulation context from a configuration object.
0.8.3 (2023-08-02)
~~~~~~~~~~~~~~~~~~
Changed
^^^^^^^
* Moved the :class:`ActuatorBase` class to the :mod:`omni.isaac.orbit.actuators.actuator_base` module.
* Renamed the :mod:`omni.isaac.orbit.actuators.actuator` module to :mod:`omni.isaac.orbit.actuators.actuator_pd`
to make it more explicit that it contains the PD actuator models.
0.8.2 (2023-08-02)
~~~~~~~~~~~~~~~~~~
Changed
^^^^^^^
* Cleaned up the :class:`omni.isaac.orbit.terrain.TerrainImporter` class to take all the parameters from the configuration
object. This makes it consistent with the other classes in the package.
* Moved the configuration classes for terrain generator and terrain importer into separate files to resolve circular
dependency issues.
0.8.1 (2023-08-02)
~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Added a hack into the :class:`omni.isaac.orbit.app.AppLauncher` class to remove orbit packages from the path before launching
  the simulation application. This prevents the warning messages that appear when the user launches the ``SimulationApp``.
Added
^^^^^
* Enabled necessary viewport extensions in the :class:`omni.isaac.orbit.app.AppLauncher` class itself if ``VIEWPORT_ENABLED``
flag is true.
0.8.0 (2023-07-26)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added the :class:`ActionManager` class to the :mod:`omni.isaac.orbit.managers` module to handle actions in the
environment through action terms.
* Added contact force history to the :class:`omni.isaac.orbit.sensors.ContactSensor` class. The history is stored
in the ``net_forces_w_history`` attribute of the sensor data.
Changed
^^^^^^^
* Implemented lazy update of buffers in the :class:`omni.isaac.orbit.sensors.SensorBase` class. This allows the user
to update the sensor data only when required, i.e. when the data is requested by the user. This helps avoid double
computation of sensor data when a reset is called in the environment.
Deprecated
^^^^^^^^^^
* Removed the support for different backends in the sensor class. We now only use PyTorch as the backend.
* Removed the concept of actuator groups. They are now handled by the :class:`omni.isaac.orbit.managers.ActionManager`
class. The actuator models are now directly handled by the robot class itself.
0.7.4 (2023-07-26)
~~~~~~~~~~~~~~~~~~
Changed
^^^^^^^
* Changed the behavior of the :class:`omni.isaac.orbit.terrains.TerrainImporter` class. It now expects the terrain
type to be specified in the configuration object. This allows the user to specify everything in the configuration
object and not have to do an explicit call to import a terrain.
Fixed
^^^^^
* Fixed setting of quaternion orientations inside the :class:`omni.isaac.orbit.markers.VisualizationMarkers` class.
  Earlier, the orientation was being set into the point instancer in the wrong order (``wxyz`` instead of ``xyzw``).
0.7.3 (2023-07-25)
~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the issue with multiple inheritance in the :class:`omni.isaac.orbit.utils.configclass` decorator.
  Earlier, if the inheritance tree was more than one level deep, the lowest-level configuration class
  did not update its values from the middle-level classes.
0.7.2 (2023-07-24)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added the method :meth:`replace` to the :class:`omni.isaac.orbit.utils.configclass` decorator to allow
creating a new configuration object with values replaced from keyword arguments. This function internally
calls the `dataclasses.replace <https://docs.python.org/3/library/dataclasses.html#dataclasses.replace>`_.
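A minimal sketch of the new method (the field names are illustrative):

.. code-block:: python

    from omni.isaac.orbit.utils import configclass

    @configclass
    class EnvCfg:
        num_envs: int = 64
        env_spacing: float = 2.5

    base_cfg = EnvCfg()
    # returns a new object; ``base_cfg`` is left untouched
    play_cfg = base_cfg.replace(num_envs=1)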
Fixed
^^^^^
* Fixed the handling of class types as member values in the :meth:`omni.isaac.orbit.utils.configclass`. Earlier it was
throwing an error since class types were skipped in the if-else block.
0.7.1 (2023-07-22)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added the :class:`TerminationManager`, :class:`CurriculumManager`, and :class:`RandomizationManager` classes
to the :mod:`omni.isaac.orbit.managers` module to handle termination, curriculum, and randomization respectively.
0.7.0 (2023-07-22)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Created a new :mod:`omni.isaac.orbit.managers` module for all the managers related to the environment / scene.
This includes the :class:`omni.isaac.orbit.managers.ObservationManager` and :class:`omni.isaac.orbit.managers.RewardManager`
classes that were previously in the :mod:`omni.isaac.orbit.utils.mdp` module.
* Added the :class:`omni.isaac.orbit.managers.ManagerBase` class to handle the creation of managers.
* Added configuration classes for :class:`ObservationTermCfg` and :class:`RewardTermCfg` to allow easy creation of
observation and reward terms.
Changed
^^^^^^^
* Changed the behavior of :class:`ObservationManager` and :class:`RewardManager` classes to accept the key ``func``
in each configuration term to be a callable. This removes the need to inherit from the base class
and allows more reusability of the functions across different environments.
* Moved the old managers to the :mod:`omni.isaac.orbit.compat.utils.mdp` module.
* Modified the necessary scripts to use the :mod:`omni.isaac.orbit.compat.utils.mdp` module.
0.6.2 (2023-07-21)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added the :mod:`omni.isaac.orbit.command_generators` to generate different commands based on the desired task.
It allows the user to generate commands for different tasks in the same environment without having to write
custom code for each task.
0.6.1 (2023-07-16)
~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the :meth:`omni.isaac.orbit.utils.math.quat_apply_yaw` to compute the yaw quaternion correctly.
Added
^^^^^
* Added functions to convert string and callable objects in :mod:`omni.isaac.orbit.utils.string`.
0.6.0 (2023-07-16)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added the argument :attr:`sort_keys` to the :meth:`omni.isaac.orbit.utils.io.yaml.dump_yaml` method to allow
enabling/disabling of sorting of keys in the output yaml file.
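A minimal sketch of the new argument (assuming the positional file-path/data signature):

.. code-block:: python

    from omni.isaac.orbit.utils.io import dump_yaml

    data = {"beta": 2, "alpha": 1}
    # keys keep their insertion order instead of being sorted alphabetically
    dump_yaml("/tmp/config.yaml", data, sort_keys=False)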
Fixed
^^^^^
* Fixed the ordering of terms in :mod:`omni.isaac.orbit.utils.configclass` to be consistent in the order in which
they are defined. Previously, the ordering was done alphabetically which made it inconsistent with the order in which
the parameters were defined.
Changed
^^^^^^^
* Changed the default value of the argument :attr:`sort_keys` in the :meth:`omni.isaac.orbit.utils.io.yaml.dump_yaml`
method to ``False``.
* Moved the old config classes in :mod:`omni.isaac.orbit.utils.configclass` to
:mod:`omni.isaac.orbit.compat.utils.configclass` so that users can still run their old code where alphabetical
ordering was used.
0.5.0 (2023-07-04)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added a generalized :class:`omni.isaac.orbit.sensors.SensorBase` class that leverages the ideas of views to
handle multiple sensors in a single class.
* Added the classes :class:`omni.isaac.orbit.sensors.RayCaster`, :class:`omni.isaac.orbit.sensors.ContactSensor`,
and :class:`omni.isaac.orbit.sensors.Camera` that output a batched tensor of sensor data.
Changed
^^^^^^^
* Renamed the parameter ``sensor_tick`` to ``update_freq`` to make it more intuitive.
* Moved the old sensors in :mod:`omni.isaac.orbit.sensors` to :mod:`omni.isaac.orbit.compat.sensors`.
* Modified the standalone scripts to use the :mod:`omni.isaac.orbit.compat.sensors` module.
0.4.4 (2023-07-05)
~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the :meth:`omni.isaac.orbit.terrains.trimesh.utils.make_plane` method to handle the case when the
plane origin does not need to be centered.
* Added the :attr:`omni.isaac.orbit.terrains.TerrainGeneratorCfg.seed` to make generation of terrains reproducible.
The default value is ``None`` which means that the seed is not set.
Changed
^^^^^^^
* Changed the saving of ``origins`` in :class:`omni.isaac.orbit.terrains.TerrainGenerator` class to be in CSV format
instead of NPY format.
0.4.3 (2023-06-28)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added the :class:`omni.isaac.orbit.markers.PointInstancerMarker` class that wraps around
`UsdGeom.PointInstancer <https://graphics.pixar.com/usd/dev/api/class_usd_geom_point_instancer.html>`_
to directly work with torch and numpy arrays.
Changed
^^^^^^^
* Moved the old markers in :mod:`omni.isaac.orbit.markers` to :mod:`omni.isaac.orbit.compat.markers`.
* Modified the standalone scripts to use the :mod:`omni.isaac.orbit.compat.markers` module.
0.4.2 (2023-06-28)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added the sub-module :mod:`omni.isaac.orbit.terrains` to allow procedural generation of terrains and supporting
importing of terrains from different sources (meshes, usd files or default ground plane).
0.4.1 (2023-06-27)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added the :class:`omni.isaac.orbit.app.AppLauncher` class to allow controlled instantiation of
the `SimulationApp <https://docs.omniverse.nvidia.com/py/isaacsim/source/extensions/omni.isaac.kit/docs/index.html>`_
and extension loading for remote deployment and ROS bridges.
Changed
^^^^^^^
* Modified all standalone scripts to use the :class:`omni.isaac.orbit.app.AppLauncher` class.
0.4.0 (2023-05-27)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added a helper class :class:`omni.isaac.orbit.asset_loader.UrdfLoader` that converts a URDF file to instanceable USD
file based on the input configuration object.
0.3.2 (2023-04-27)
~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Added safe-printing of functions while using the :meth:`omni.isaac.orbit.utils.dict.print_dict` function.
0.3.1 (2023-04-23)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added a modified version of ``lula_franka_gen.urdf`` which includes an end-effector frame.
* Added a standalone script ``play_rmpflow.py`` to show RMPFlow controller.
Fixed
^^^^^
* Fixed the splitting of commands in the :meth:`ActuatorGroup.compute` method. Earlier it was reshaping the
commands to the shape ``(num_actuators, num_commands)`` which was causing the commands to be split incorrectly.
* Fixed the processing of actuator command in the :meth:`RobotBase._process_actuators_cfg` to deal with multiple
command types when using "implicit" actuator group.
0.3.0 (2023-04-20)
~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Added the destructor to the keyboard devices to unsubscribe from carb.
Added
^^^^^
* Added the :class:`Se2Gamepad` and :class:`Se3Gamepad` for gamepad teleoperation support.
0.2.8 (2023-04-10)
~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed bugs in :meth:`axis_angle_from_quat` in the ``omni.isaac.orbit.utils.math`` to handle quaternion with negative w component.
* Fixed bugs in :meth:`subtract_frame_transforms` in the ``omni.isaac.orbit.utils.math`` by adding the missing final rotation.
0.2.7 (2023-04-07)
~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed repetition in applying mimic multiplier for "p_abs" in the :class:`GripperActuatorGroup` class.
* Fixed bugs in :meth:`reset_buffers` in the :class:`RobotBase` and :class:`LeggedRobot` classes.
0.2.6 (2023-03-16)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added the :class:`CollisionPropertiesCfg` to rigid/articulated object and robot base classes.
* Added the :class:`PhysicsMaterialCfg` to the :class:`SingleArm` class for tool sites.
Changed
^^^^^^^
* Changed the default control mode of the :obj:`PANDA_HAND_MIMIC_GROUP_CFG` from ``"v_abs"`` to ``"p_abs"``.
Using velocity control for the mimic group can cause the hand to move in a jerky manner.
0.2.5 (2023-03-08)
~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the indices used for the Jacobian and dynamics quantities in the :class:`MobileManipulator` class.
0.2.4 (2023-03-04)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added :meth:`apply_nested_physics_material` to the ``omni.isaac.orbit.utils.kit``.
* Added the :meth:`sample_cylinder` to sample points from a cylinder's surface.
* Added documentation about the issue in using instanceable asset as markers.
Fixed
^^^^^
* Simplified the physics material application in the rigid object and legged robot classes.
Removed
^^^^^^^
* Removed the ``geom_prim_rel_path`` argument in the :class:`RigidObjectCfg.MetaInfoCfg` class.
0.2.3 (2023-02-24)
~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the end-effector body index used for getting the Jacobian in the :class:`SingleArm` and :class:`MobileManipulator` classes.
0.2.2 (2023-01-27)
~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the :meth:`set_world_pose_ros` and :meth:`set_world_pose_from_view` in the :class:`Camera` class.
Deprecated
^^^^^^^^^^
* Removed the :meth:`set_world_pose_from_ypr` method from the :class:`Camera` class.
0.2.1 (2023-01-26)
~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed the :class:`Camera` class to support different fisheye projection types.
0.2.0 (2023-01-25)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added support for warp backend in camera utilities.
* Extended the ``play_camera.py`` with ``--gpu`` flag to use GPU replicator backend.
0.1.1 (2023-01-24)
~~~~~~~~~~~~~~~~~~
Fixed
^^^^^
* Fixed setting of physics material on the ground plane when using :meth:`omni.isaac.orbit.utils.kit.create_ground_plane` function.
0.1.0 (2023-01-17)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Initial release of the extension with experimental API.
* Available robot configurations:
* **Quadrupeds:** Unitree A1, ANYmal B, ANYmal C
* **Single-arm manipulators:** Franka Emika arm, UR5
* **Mobile manipulators:** Clearpath Ridgeback with Franka Emika arm or UR5
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit/docs/README.md
# Orbit: Framework
Orbit includes its own set of interfaces and wrappers around Isaac Sim classes. One of the main goals behind this
decision is to have a unified description for different systems. While Isaac Sim tries to stay general for a wide
variety of simulation requirements, our goal has been to specialize these interfaces for learning. This includes
features such as augmenting simulators with non-ideal actuator models, managing different observation and reward
settings, integrating different sensors, as well as providing interfaces to features that are currently not available
in Isaac Sim but are available from the physics side (such as deformable bodies).
We recommend users try out the demo scripts in `standalone/demos`, which show how different parts of the framework
can be integrated together.
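For example, a demo can be launched through the workstation launcher script. The exact demo name below is an
assumption for illustration; check the `standalone/demos` directory for the available scripts:

```bash
./orbit.sh -p source/standalone/demos/arms.py
```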
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_assets/config/extension.toml
[package]
# Semantic Versioning is used: https://semver.org/
version = "0.1.2"
# Description
title = "ORBIT Assets"
description="Extension containing configuration instances of different assets and sensors"
readme = "docs/README.md"
repository = "https://github.com/NVIDIA-Omniverse/Orbit"
category = "robotics"
keywords = ["kit", "robotics", "assets", "orbit"]
[dependencies]
"omni.isaac.orbit" = {}
# Main python module this extension provides.
[[python.module]]
name = "omni.isaac.orbit_assets"
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_assets/omni/isaac/orbit_assets/unitree.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Configuration for Unitree robots.
The following configurations are available:
* :obj:`UNITREE_A1_CFG`: Unitree A1 robot with DC motor model for the legs
* :obj:`UNITREE_GO1_CFG`: Unitree Go1 robot with actuator net model for the legs
* :obj:`UNITREE_GO2_CFG`: Unitree Go2 robot with DC motor model for the legs
Reference: https://github.com/unitreerobotics/unitree_ros
"""
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.actuators import ActuatorNetMLPCfg, DCMotorCfg
from omni.isaac.orbit.assets.articulation import ArticulationCfg
from omni.isaac.orbit.utils.assets import ISAAC_ORBIT_NUCLEUS_DIR
##
# Configuration - Actuators.
##
GO1_ACTUATOR_CFG = ActuatorNetMLPCfg(
joint_names_expr=[".*_hip_joint", ".*_thigh_joint", ".*_calf_joint"],
network_file=f"{ISAAC_ORBIT_NUCLEUS_DIR}/ActuatorNets/Unitree/unitree_go1.pt",
pos_scale=-1.0,
vel_scale=1.0,
torque_scale=1.0,
input_order="pos_vel",
input_idx=[0, 1, 2],
effort_limit=23.7, # taken from spec sheet
velocity_limit=30.0, # taken from spec sheet
saturation_effort=23.7, # same as effort limit
)
"""Configuration of Go1 actuators using MLP model.
Actuator specifications: https://shop.unitree.com/products/go1-motor
This model is taken from: https://github.com/Improbable-AI/walk-these-ways
"""
##
# Configuration
##
UNITREE_A1_CFG = ArticulationCfg(
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_ORBIT_NUCLEUS_DIR}/Robots/Unitree/A1/a1.usd",
activate_contact_sensors=True,
rigid_props=sim_utils.RigidBodyPropertiesCfg(
disable_gravity=False,
retain_accelerations=False,
linear_damping=0.0,
angular_damping=0.0,
max_linear_velocity=1000.0,
max_angular_velocity=1000.0,
max_depenetration_velocity=1.0,
),
articulation_props=sim_utils.ArticulationRootPropertiesCfg(
enabled_self_collisions=False, solver_position_iteration_count=4, solver_velocity_iteration_count=0
),
),
init_state=ArticulationCfg.InitialStateCfg(
pos=(0.0, 0.0, 0.42),
joint_pos={
".*L_hip_joint": 0.1,
".*R_hip_joint": -0.1,
"F[L,R]_thigh_joint": 0.8,
"R[L,R]_thigh_joint": 1.0,
".*_calf_joint": -1.5,
},
joint_vel={".*": 0.0},
),
soft_joint_pos_limit_factor=0.9,
actuators={
"base_legs": DCMotorCfg(
joint_names_expr=[".*_hip_joint", ".*_thigh_joint", ".*_calf_joint"],
effort_limit=33.5,
saturation_effort=33.5,
velocity_limit=21.0,
stiffness=25.0,
damping=0.5,
friction=0.0,
),
},
)
"""Configuration of Unitree A1 using DC motor.
Note: Specifications taken from: https://www.trossenrobotics.com/a1-quadruped#specifications
"""
UNITREE_GO1_CFG = ArticulationCfg(
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_ORBIT_NUCLEUS_DIR}/Robots/Unitree/Go1/go1.usd",
activate_contact_sensors=True,
rigid_props=sim_utils.RigidBodyPropertiesCfg(
disable_gravity=False,
retain_accelerations=False,
linear_damping=0.0,
angular_damping=0.0,
max_linear_velocity=1000.0,
max_angular_velocity=1000.0,
max_depenetration_velocity=1.0,
),
articulation_props=sim_utils.ArticulationRootPropertiesCfg(
enabled_self_collisions=False, solver_position_iteration_count=4, solver_velocity_iteration_count=0
),
),
init_state=ArticulationCfg.InitialStateCfg(
pos=(0.0, 0.0, 0.4),
joint_pos={
".*L_hip_joint": 0.1,
".*R_hip_joint": -0.1,
"F[L,R]_thigh_joint": 0.8,
"R[L,R]_thigh_joint": 1.0,
".*_calf_joint": -1.5,
},
joint_vel={".*": 0.0},
),
soft_joint_pos_limit_factor=0.9,
actuators={
"base_legs": GO1_ACTUATOR_CFG,
},
)
"""Configuration of Unitree Go1 using MLP-based actuator model."""
UNITREE_GO2_CFG = ArticulationCfg(
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_ORBIT_NUCLEUS_DIR}/Robots/Unitree/Go2/go2.usd",
activate_contact_sensors=True,
rigid_props=sim_utils.RigidBodyPropertiesCfg(
disable_gravity=False,
retain_accelerations=False,
linear_damping=0.0,
angular_damping=0.0,
max_linear_velocity=1000.0,
max_angular_velocity=1000.0,
max_depenetration_velocity=1.0,
),
articulation_props=sim_utils.ArticulationRootPropertiesCfg(
enabled_self_collisions=False, solver_position_iteration_count=4, solver_velocity_iteration_count=0
),
),
init_state=ArticulationCfg.InitialStateCfg(
pos=(0.0, 0.0, 0.4),
joint_pos={
".*L_hip_joint": 0.1,
".*R_hip_joint": -0.1,
"F[L,R]_thigh_joint": 0.8,
"R[L,R]_thigh_joint": 1.0,
".*_calf_joint": -1.5,
},
joint_vel={".*": 0.0},
),
soft_joint_pos_limit_factor=0.9,
actuators={
"base_legs": DCMotorCfg(
joint_names_expr=[".*_hip_joint", ".*_thigh_joint", ".*_calf_joint"],
effort_limit=23.5,
saturation_effort=23.5,
velocity_limit=30.0,
stiffness=25.0,
damping=0.5,
friction=0.0,
),
},
)
"""Configuration of Unitree Go2 using DC-Motor actuator model."""
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_assets/omni/isaac/orbit_assets/shadow_hand.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Configuration for the dexterous hand from Shadow Robot.
The following configurations are available:
* :obj:`SHADOW_HAND_CFG`: Shadow Hand with implicit actuator model.
Reference:
* https://www.shadowrobot.com/dexterous-hand-series/
"""
from __future__ import annotations
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.actuators.actuator_cfg import ImplicitActuatorCfg
from omni.isaac.orbit.assets.articulation import ArticulationCfg
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
##
# Configuration
##
SHADOW_HAND_CFG = ArticulationCfg(
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Robots/ShadowHand/shadow_hand_instanceable.usd",
activate_contact_sensors=False,
rigid_props=sim_utils.RigidBodyPropertiesCfg(
disable_gravity=True,
retain_accelerations=True,
max_depenetration_velocity=1000.0,
),
articulation_props=sim_utils.ArticulationRootPropertiesCfg(
enabled_self_collisions=True,
solver_position_iteration_count=8,
solver_velocity_iteration_count=0,
sleep_threshold=0.005,
stabilization_threshold=0.0005,
),
# collision_props=sim_utils.CollisionPropertiesCfg(contact_offset=0.005, rest_offset=0.0),
joint_drive_props=sim_utils.JointDrivePropertiesCfg(drive_type="force"),
fixed_tendons_props=sim_utils.FixedTendonPropertiesCfg(limit_stiffness=30.0, damping=0.1),
),
init_state=ArticulationCfg.InitialStateCfg(
pos=(0.0, 0.0, 0.5),
rot=(0.0, 0.0, -0.7071, 0.7071),
joint_pos={".*": 0.0},
),
actuators={
"fingers": ImplicitActuatorCfg(
joint_names_expr=["robot0_WR.*", "robot0_(FF|MF|RF|LF|TH)J(3|2|1)", "robot0_(LF|TH)J4", "robot0_THJ0"],
effort_limit={
"robot0_WRJ1": 4.785,
"robot0_WRJ0": 2.175,
"robot0_(FF|MF|RF|LF)J1": 0.7245,
"robot0_FFJ(3|2)": 0.9,
"robot0_MFJ(3|2)": 0.9,
"robot0_RFJ(3|2)": 0.9,
"robot0_LFJ(4|3|2)": 0.9,
"robot0_THJ4": 2.3722,
"robot0_THJ3": 1.45,
"robot0_THJ(2|1)": 0.99,
"robot0_THJ0": 0.81,
},
stiffness={
"robot0_WRJ.*": 5.0,
"robot0_(FF|MF|RF|LF|TH)J(3|2|1)": 1.0,
"robot0_(LF|TH)J4": 1.0,
"robot0_THJ0": 1.0,
},
damping={
"robot0_WRJ.*": 0.5,
"robot0_(FF|MF|RF|LF|TH)J(3|2|1)": 0.1,
"robot0_(LF|TH)J4": 0.1,
"robot0_THJ0": 0.1,
},
),
},
soft_joint_pos_limit_factor=1.0,
)
"""Configuration of Shadow Hand robot."""
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_assets/omni/isaac/orbit_assets/sawyer.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Configuration for the Rethink Robotics arms.
The following configuration parameters are available:
* :obj:`SAWYER_CFG`: The Sawyer arm without any tool attached.
Reference: https://github.com/RethinkRobotics/sawyer_robot
"""
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.actuators import ImplicitActuatorCfg
from omni.isaac.orbit.assets.articulation import ArticulationCfg
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
##
# Configuration
##
SAWYER_CFG = ArticulationCfg(
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Robots/RethinkRobotics/sawyer_instanceable.usd",
rigid_props=sim_utils.RigidBodyPropertiesCfg(
disable_gravity=False,
max_depenetration_velocity=5.0,
),
articulation_props=sim_utils.ArticulationRootPropertiesCfg(
enabled_self_collisions=True, solver_position_iteration_count=8, solver_velocity_iteration_count=0
),
activate_contact_sensors=False,
),
init_state=ArticulationCfg.InitialStateCfg(
joint_pos={
"head_pan": 0.0,
"right_j0": 0.0,
"right_j1": -0.785,
"right_j2": 0.0,
"right_j3": 1.05,
"right_j4": 0.0,
"right_j5": 1.3,
"right_j6": 0.0,
},
),
actuators={
"head": ImplicitActuatorCfg(
joint_names_expr=["head_pan"],
velocity_limit=100.0,
effort_limit=8.0,
stiffness=800.0,
damping=40.0,
),
"arm": ImplicitActuatorCfg(
joint_names_expr=["right_j[0-6]"],
velocity_limit=100.0,
effort_limit={
"right_j[0-1]": 80.0,
"right_j[2-3]": 40.0,
"right_j[4-6]": 9.0,
},
stiffness=100.0,
damping=4.0,
),
},
)
"""Configuration of Rethink Robotics Sawyer arm."""
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_assets/omni/isaac/orbit_assets/__init__.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES, ETH Zurich, and University of Toronto
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Package containing asset and sensor configurations."""
import os
import toml
# Conveniences to other module directories via relative paths
ORBIT_ASSETS_EXT_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), "../../../"))
"""Path to the extension source directory."""
ORBIT_ASSETS_DATA_DIR = os.path.join(ORBIT_ASSETS_EXT_DIR, "data")
"""Path to the extension data directory."""
ORBIT_ASSETS_METADATA = toml.load(os.path.join(ORBIT_ASSETS_EXT_DIR, "config", "extension.toml"))
"""Extension metadata dictionary parsed from the extension.toml file."""
# Configure the module-level variables
__version__ = ORBIT_ASSETS_METADATA["package"]["version"]
##
# Configuration for different assets.
##
from .allegro import *
from .anymal import *
from .cartpole import *
from .franka import *
from .kinova import *
from .ridgeback_franka import *
from .sawyer import *
from .shadow_hand import *
from .unitree import *
from .universal_robots import *
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_assets/omni/isaac/orbit_assets/ridgeback_franka.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Configuration for the Ridgeback-Manipulation robots.
The following configurations are available:
* :obj:`RIDGEBACK_FRANKA_PANDA_CFG`: Clearpath Ridgeback base with Franka Emika arm
Reference: https://github.com/ridgeback/ridgeback_manipulation
"""
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.actuators import ImplicitActuatorCfg
from omni.isaac.orbit.assets.articulation import ArticulationCfg
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
RIDGEBACK_FRANKA_PANDA_CFG = ArticulationCfg(
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Robots/Clearpath/RidgebackFranka/ridgeback_franka.usd",
activate_contact_sensors=False,
),
init_state=ArticulationCfg.InitialStateCfg(
joint_pos={
# base
"dummy_base_prismatic_y_joint": 0.0,
"dummy_base_prismatic_x_joint": 0.0,
"dummy_base_revolute_z_joint": 0.0,
# franka arm
"panda_joint1": 0.0,
"panda_joint2": -0.569,
"panda_joint3": 0.0,
"panda_joint4": -2.810,
"panda_joint5": 0.0,
"panda_joint6": 3.037,
"panda_joint7": 0.741,
# tool
"panda_finger_joint.*": 0.035,
},
joint_vel={".*": 0.0},
),
actuators={
"base": ImplicitActuatorCfg(
joint_names_expr=["dummy_base_.*"],
velocity_limit=100.0,
effort_limit=1000.0,
stiffness=0.0,
damping=1e5,
),
"panda_shoulder": ImplicitActuatorCfg(
joint_names_expr=["panda_joint[1-4]"],
effort_limit=87.0,
velocity_limit=100.0,
stiffness=800.0,
damping=40.0,
),
"panda_forearm": ImplicitActuatorCfg(
joint_names_expr=["panda_joint[5-7]"],
effort_limit=12.0,
velocity_limit=100.0,
stiffness=800.0,
damping=40.0,
),
"panda_hand": ImplicitActuatorCfg(
joint_names_expr=["panda_finger_joint.*"],
effort_limit=200.0,
velocity_limit=0.2,
stiffness=1e5,
damping=1e3,
),
},
)
"""Configuration of Franka arm with Franka Hand on a Clearpath Ridgeback base using implicit actuator models.
The following control configuration is used:
* Base: velocity control with damping
* Arm: position control with damping (contains default position offsets)
* Hand: mimic control
"""
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_assets/omni/isaac/orbit_assets/universal_robots.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Configuration for the Universal Robots.
The following configuration parameters are available:
* :obj:`UR10_CFG`: The UR10 arm without a gripper.
Reference: https://github.com/ros-industrial/universal_robot
"""
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.actuators import ImplicitActuatorCfg
from omni.isaac.orbit.assets.articulation import ArticulationCfg
from omni.isaac.orbit.utils.assets import ISAAC_ORBIT_NUCLEUS_DIR
##
# Configuration
##
UR10_CFG = ArticulationCfg(
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_ORBIT_NUCLEUS_DIR}/Robots/UniversalRobots/UR10/ur10_instanceable.usd",
rigid_props=sim_utils.RigidBodyPropertiesCfg(
disable_gravity=False,
max_depenetration_velocity=5.0,
),
activate_contact_sensors=False,
),
init_state=ArticulationCfg.InitialStateCfg(
joint_pos={
"shoulder_pan_joint": 0.0,
"shoulder_lift_joint": -1.712,
"elbow_joint": 1.712,
"wrist_1_joint": 0.0,
"wrist_2_joint": 0.0,
"wrist_3_joint": 0.0,
},
),
actuators={
"arm": ImplicitActuatorCfg(
joint_names_expr=[".*"],
velocity_limit=100.0,
effort_limit=87.0,
stiffness=800.0,
damping=40.0,
),
},
)
"""Configuration of UR-10 arm using implicit actuator models."""
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_assets/omni/isaac/orbit_assets/franka.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Configuration for the Franka Emika robots.
The following configurations are available:
* :obj:`FRANKA_PANDA_CFG`: Franka Emika Panda robot with Panda hand
* :obj:`FRANKA_PANDA_HIGH_PD_CFG`: Franka Emika Panda robot with Panda hand with stiffer PD control
Reference: https://github.com/frankaemika/franka_ros
"""
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.actuators import ImplicitActuatorCfg
from omni.isaac.orbit.assets.articulation import ArticulationCfg
from omni.isaac.orbit.utils.assets import ISAAC_ORBIT_NUCLEUS_DIR
##
# Configuration
##
FRANKA_PANDA_CFG = ArticulationCfg(
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_ORBIT_NUCLEUS_DIR}/Robots/FrankaEmika/panda_instanceable.usd",
activate_contact_sensors=False,
rigid_props=sim_utils.RigidBodyPropertiesCfg(
disable_gravity=False,
max_depenetration_velocity=5.0,
),
articulation_props=sim_utils.ArticulationRootPropertiesCfg(
enabled_self_collisions=True, solver_position_iteration_count=8, solver_velocity_iteration_count=0
),
# collision_props=sim_utils.CollisionPropertiesCfg(contact_offset=0.005, rest_offset=0.0),
),
init_state=ArticulationCfg.InitialStateCfg(
joint_pos={
"panda_joint1": 0.0,
"panda_joint2": -0.569,
"panda_joint3": 0.0,
"panda_joint4": -2.810,
"panda_joint5": 0.0,
"panda_joint6": 3.037,
"panda_joint7": 0.741,
"panda_finger_joint.*": 0.04,
},
),
actuators={
"panda_shoulder": ImplicitActuatorCfg(
joint_names_expr=["panda_joint[1-4]"],
effort_limit=87.0,
velocity_limit=2.175,
stiffness=80.0,
damping=4.0,
),
"panda_forearm": ImplicitActuatorCfg(
joint_names_expr=["panda_joint[5-7]"],
effort_limit=12.0,
velocity_limit=2.61,
stiffness=80.0,
damping=4.0,
),
"panda_hand": ImplicitActuatorCfg(
joint_names_expr=["panda_finger_joint.*"],
effort_limit=200.0,
velocity_limit=0.2,
stiffness=2e3,
damping=1e2,
),
},
soft_joint_pos_limit_factor=1.0,
)
"""Configuration of Franka Emika Panda robot."""
FRANKA_PANDA_HIGH_PD_CFG = FRANKA_PANDA_CFG.copy()
FRANKA_PANDA_HIGH_PD_CFG.spawn.rigid_props.disable_gravity = True
FRANKA_PANDA_HIGH_PD_CFG.actuators["panda_shoulder"].stiffness = 400.0
FRANKA_PANDA_HIGH_PD_CFG.actuators["panda_shoulder"].damping = 80.0
FRANKA_PANDA_HIGH_PD_CFG.actuators["panda_forearm"].stiffness = 400.0
FRANKA_PANDA_HIGH_PD_CFG.actuators["panda_forearm"].damping = 80.0
"""Configuration of Franka Emika Panda robot with stiffer PD control.
This configuration is useful for task-space control using differential IK.
"""
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_assets/omni/isaac/orbit_assets/cartpole.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Configuration for a simple Cartpole robot."""
from __future__ import annotations
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.actuators import ImplicitActuatorCfg
from omni.isaac.orbit.assets import ArticulationCfg
from omni.isaac.orbit.utils.assets import ISAAC_ORBIT_NUCLEUS_DIR
CARTPOLE_CFG = ArticulationCfg(
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_ORBIT_NUCLEUS_DIR}/Robots/Classic/Cartpole/cartpole.usd",
rigid_props=sim_utils.RigidBodyPropertiesCfg(
rigid_body_enabled=True,
max_linear_velocity=1000.0,
max_angular_velocity=1000.0,
max_depenetration_velocity=100.0,
enable_gyroscopic_forces=True,
),
articulation_props=sim_utils.ArticulationRootPropertiesCfg(
enabled_self_collisions=False,
solver_position_iteration_count=4,
solver_velocity_iteration_count=0,
sleep_threshold=0.005,
stabilization_threshold=0.001,
),
),
init_state=ArticulationCfg.InitialStateCfg(
pos=(0.0, 0.0, 2.0), joint_pos={"slider_to_cart": 0.0, "cart_to_pole": 0.0}
),
actuators={
"cart_actuator": ImplicitActuatorCfg(
joint_names_expr=["slider_to_cart"],
effort_limit=400.0,
velocity_limit=100.0,
stiffness=0.0,
damping=10.0,
),
"pole_actuator": ImplicitActuatorCfg(
joint_names_expr=["cart_to_pole"], effort_limit=400.0, velocity_limit=100.0, stiffness=0.0, damping=0.0
),
},
)
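"""Configuration of a simple cart-pole robot."""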
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_assets/omni/isaac/orbit_assets/allegro.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Configuration for the Allegro Hand robots from Wonik Robotics.
The following configurations are available:
* :obj:`ALLEGRO_HAND_CFG`: Allegro Hand with implicit actuator model.
Reference:
* https://www.wonikrobotics.com/robot-hand
"""
from __future__ import annotations
import math
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.actuators.actuator_cfg import ImplicitActuatorCfg
from omni.isaac.orbit.assets.articulation import ArticulationCfg
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
##
# Configuration
##
ALLEGRO_HAND_CFG = ArticulationCfg(
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Robots/AllegroHand/allegro_hand_instanceable.usd",
activate_contact_sensors=False,
rigid_props=sim_utils.RigidBodyPropertiesCfg(
disable_gravity=True,
retain_accelerations=False,
enable_gyroscopic_forces=False,
angular_damping=0.01,
max_linear_velocity=1000.0,
max_angular_velocity=64 / math.pi * 180.0,
max_depenetration_velocity=1000.0,
max_contact_impulse=1e32,
),
articulation_props=sim_utils.ArticulationRootPropertiesCfg(
enabled_self_collisions=True,
solver_position_iteration_count=8,
solver_velocity_iteration_count=0,
sleep_threshold=0.005,
stabilization_threshold=0.0005,
),
# collision_props=sim_utils.CollisionPropertiesCfg(contact_offset=0.005, rest_offset=0.0),
),
init_state=ArticulationCfg.InitialStateCfg(
pos=(0.0, 0.0, 0.5),
rot=(0.257551, 0.283045, 0.683330, -0.621782),
joint_pos={"^(?!thumb_joint_0).*": 0.0, "thumb_joint_0": 0.28},
),
actuators={
"fingers": ImplicitActuatorCfg(
joint_names_expr=[".*"],
effort_limit=0.5,
velocity_limit=100.0,
stiffness=3.0,
damping=0.1,
friction=0.01,
),
},
soft_joint_pos_limit_factor=1.0,
)
"""Configuration of Allegro Hand robot."""
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_assets/omni/isaac/orbit_assets/kinova.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Configuration for the Kinova Robotics arms.
The following configuration parameters are available:
* :obj:`KINOVA_JACO2_N7S300_CFG`: The Kinova JACO2 (7-Dof) arm with a 3-finger gripper.
* :obj:`KINOVA_JACO2_N6S300_CFG`: The Kinova JACO2 (6-Dof) arm with a 3-finger gripper.
* :obj:`KINOVA_GEN3_N7_CFG`: The Kinova Gen3 (7-Dof) arm with no gripper.
Reference: https://github.com/Kinovarobotics/kinova-ros
"""
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.actuators import ImplicitActuatorCfg
from omni.isaac.orbit.assets.articulation import ArticulationCfg
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
##
# Configuration
##
KINOVA_JACO2_N7S300_CFG = ArticulationCfg(
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Robots/Kinova/Jaco2/J2N7S300/j2n7s300_instanceable.usd",
rigid_props=sim_utils.RigidBodyPropertiesCfg(
disable_gravity=False,
max_depenetration_velocity=5.0,
),
articulation_props=sim_utils.ArticulationRootPropertiesCfg(
enabled_self_collisions=True, solver_position_iteration_count=8, solver_velocity_iteration_count=0
),
activate_contact_sensors=False,
),
init_state=ArticulationCfg.InitialStateCfg(
joint_pos={
"j2n7s300_joint_1": 0.0,
"j2n7s300_joint_2": 2.76,
"j2n7s300_joint_3": 0.0,
"j2n7s300_joint_4": 2.0,
"j2n7s300_joint_5": 2.0,
"j2n7s300_joint_6": 0.0,
"j2n7s300_joint_7": 0.0,
"j2n7s300_joint_finger_[1-3]": 0.2, # close: 1.2, open: 0.2
"j2n7s300_joint_finger_tip_[1-3]": 0.2, # close: 1.2, open: 0.2
},
),
actuators={
"arm": ImplicitActuatorCfg(
joint_names_expr=[".*_joint_[1-7]"],
velocity_limit=100.0,
effort_limit={
".*_joint_[1-2]": 80.0,
".*_joint_[3-4]": 40.0,
".*_joint_[5-7]": 20.0,
},
stiffness={
".*_joint_[1-4]": 40.0,
".*_joint_[5-7]": 15.0,
},
damping={
".*_joint_[1-4]": 1.0,
".*_joint_[5-7]": 0.5,
},
),
"gripper": ImplicitActuatorCfg(
joint_names_expr=[".*_finger_[1-3]", ".*_finger_tip_[1-3]"],
velocity_limit=100.0,
effort_limit=2.0,
stiffness=1.2,
damping=0.01,
),
},
)
"""Configuration of Kinova JACO2 (7-Dof) arm with 3-finger gripper."""
KINOVA_JACO2_N6S300_CFG = ArticulationCfg(
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Robots/Kinova/Jaco2/J2N6S300/j2n6s300_instanceable.usd",
rigid_props=sim_utils.RigidBodyPropertiesCfg(
disable_gravity=False,
max_depenetration_velocity=5.0,
),
articulation_props=sim_utils.ArticulationRootPropertiesCfg(
enabled_self_collisions=True, solver_position_iteration_count=8, solver_velocity_iteration_count=0
),
activate_contact_sensors=False,
),
init_state=ArticulationCfg.InitialStateCfg(
joint_pos={
"j2n6s300_joint_1": 0.0,
"j2n6s300_joint_2": 2.76,
"j2n6s300_joint_3": 2.76,
"j2n6s300_joint_4": 2.5,
"j2n6s300_joint_5": 2.0,
"j2n6s300_joint_6": 0.0,
"j2n6s300_joint_finger_[1-3]": 0.2, # close: 1.2, open: 0.2
"j2n6s300_joint_finger_tip_[1-3]": 0.2, # close: 1.2, open: 0.2
},
),
actuators={
"arm": ImplicitActuatorCfg(
joint_names_expr=[".*_joint_[1-6]"],
velocity_limit=100.0,
effort_limit={
".*_joint_[1-2]": 80.0,
".*_joint_3": 40.0,
".*_joint_[4-6]": 20.0,
},
stiffness={
".*_joint_[1-3]": 40.0,
".*_joint_[4-6]": 15.0,
},
damping={
".*_joint_[1-3]": 1.0,
".*_joint_[4-6]": 0.5,
},
),
"gripper": ImplicitActuatorCfg(
joint_names_expr=[".*_finger_[1-3]", ".*_finger_tip_[1-3]"],
velocity_limit=100.0,
effort_limit=2.0,
stiffness=1.2,
damping=0.01,
),
},
)
"""Configuration of Kinova JACO2 (6-Dof) arm with 3-finger gripper."""
KINOVA_GEN3_N7_CFG = ArticulationCfg(
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Robots/Kinova/Gen3/gen3n7_instanceable.usd",
rigid_props=sim_utils.RigidBodyPropertiesCfg(
disable_gravity=False,
max_depenetration_velocity=5.0,
),
articulation_props=sim_utils.ArticulationRootPropertiesCfg(
enabled_self_collisions=True, solver_position_iteration_count=8, solver_velocity_iteration_count=0
),
activate_contact_sensors=False,
),
init_state=ArticulationCfg.InitialStateCfg(
joint_pos={
"joint_1": 0.0,
"joint_2": 0.65,
"joint_3": 0.0,
"joint_4": 1.89,
"joint_5": 0.0,
"joint_6": 0.6,
"joint_7": -1.57,
},
),
actuators={
"arm": ImplicitActuatorCfg(
joint_names_expr=["joint_[1-7]"],
velocity_limit=100.0,
effort_limit={
"joint_[1-4]": 39.0,
"joint_[5-7]": 9.0,
},
stiffness={
"joint_[1-4]": 40.0,
"joint_[5-7]": 15.0,
},
damping={
"joint_[1-4]": 1.0,
"joint_[5-7]": 0.5,
},
),
},
)
"""Configuration of Kinova Gen3 (7-Dof) arm with no gripper."""
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_assets/omni/isaac/orbit_assets/anymal.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Configuration for the ANYbotics robots.
The following configuration parameters are available:
* :obj:`ANYMAL_B_CFG`: The ANYmal-B robot with ANYdrives 3.0
* :obj:`ANYMAL_C_CFG`: The ANYmal-C robot with ANYdrives 3.0
* :obj:`ANYMAL_D_CFG`: The ANYmal-D robot with ANYdrives 3.0
Reference:
* https://github.com/ANYbotics/anymal_b_simple_description
* https://github.com/ANYbotics/anymal_c_simple_description
* https://github.com/ANYbotics/anymal_d_simple_description
"""
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.actuators import ActuatorNetLSTMCfg, DCMotorCfg
from omni.isaac.orbit.assets.articulation import ArticulationCfg
from omni.isaac.orbit.utils.assets import ISAAC_ORBIT_NUCLEUS_DIR
##
# Configuration - Actuators.
##
ANYDRIVE_3_SIMPLE_ACTUATOR_CFG = DCMotorCfg(
joint_names_expr=[".*HAA", ".*HFE", ".*KFE"],
saturation_effort=120.0,
effort_limit=80.0,
velocity_limit=7.5,
stiffness={".*": 40.0},
damping={".*": 5.0},
)
"""Configuration for ANYdrive 3.x with DC actuator model."""
ANYDRIVE_3_LSTM_ACTUATOR_CFG = ActuatorNetLSTMCfg(
joint_names_expr=[".*HAA", ".*HFE", ".*KFE"],
network_file=f"{ISAAC_ORBIT_NUCLEUS_DIR}/ActuatorNets/ANYbotics/anydrive_3_lstm_jit.pt",
saturation_effort=120.0,
effort_limit=80.0,
velocity_limit=7.5,
)
"""Configuration for ANYdrive 3.0 (used on ANYmal-C) with LSTM actuator model."""
##
# Configuration - Articulation.
##
ANYMAL_B_CFG = ArticulationCfg(
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_ORBIT_NUCLEUS_DIR}/Robots/ANYbotics/ANYmal-B/anymal_b.usd",
activate_contact_sensors=True,
rigid_props=sim_utils.RigidBodyPropertiesCfg(
disable_gravity=False,
retain_accelerations=False,
linear_damping=0.0,
angular_damping=0.0,
max_linear_velocity=1000.0,
max_angular_velocity=1000.0,
max_depenetration_velocity=1.0,
),
articulation_props=sim_utils.ArticulationRootPropertiesCfg(
enabled_self_collisions=True, solver_position_iteration_count=4, solver_velocity_iteration_count=0
),
# collision_props=sim_utils.CollisionPropertiesCfg(contact_offset=0.02, rest_offset=0.0),
),
init_state=ArticulationCfg.InitialStateCfg(
pos=(0.0, 0.0, 0.6),
joint_pos={
".*HAA": 0.0, # all HAA
".*F_HFE": 0.4, # both front HFE
".*H_HFE": -0.4, # both hind HFE
".*F_KFE": -0.8, # both front KFE
".*H_KFE": 0.8, # both hind KFE
},
),
actuators={"legs": ANYDRIVE_3_LSTM_ACTUATOR_CFG},
soft_joint_pos_limit_factor=0.95,
)
"""Configuration of ANYmal-B robot using actuator-net."""
ANYMAL_C_CFG = ArticulationCfg(
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_ORBIT_NUCLEUS_DIR}/Robots/ANYbotics/ANYmal-C/anymal_c.usd",
# usd_path=f"{ISAAC_NUCLEUS_DIR}/Robots/ANYbotics/anymal_instanceable.usd",
activate_contact_sensors=True,
rigid_props=sim_utils.RigidBodyPropertiesCfg(
disable_gravity=False,
retain_accelerations=False,
linear_damping=0.0,
angular_damping=0.0,
max_linear_velocity=1000.0,
max_angular_velocity=1000.0,
max_depenetration_velocity=1.0,
),
articulation_props=sim_utils.ArticulationRootPropertiesCfg(
enabled_self_collisions=True, solver_position_iteration_count=4, solver_velocity_iteration_count=0
),
# collision_props=sim_utils.CollisionPropertiesCfg(contact_offset=0.02, rest_offset=0.0),
),
init_state=ArticulationCfg.InitialStateCfg(
pos=(0.0, 0.0, 0.6),
joint_pos={
".*HAA": 0.0, # all HAA
".*F_HFE": 0.4, # both front HFE
".*H_HFE": -0.4, # both hind HFE
".*F_KFE": -0.8, # both front KFE
".*H_KFE": 0.8, # both hind KFE
},
),
actuators={"legs": ANYDRIVE_3_LSTM_ACTUATOR_CFG},
soft_joint_pos_limit_factor=0.95,
)
"""Configuration of ANYmal-C robot using actuator-net."""
ANYMAL_D_CFG = ArticulationCfg(
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_ORBIT_NUCLEUS_DIR}/Robots/ANYbotics/ANYmal-D/anymal_d.usd",
# usd_path=f"{ISAAC_ORBIT_NUCLEUS_DIR}/Robots/ANYbotics/ANYmal-D/anymal_d_minimal.usd",
activate_contact_sensors=True,
rigid_props=sim_utils.RigidBodyPropertiesCfg(
disable_gravity=False,
retain_accelerations=False,
linear_damping=0.0,
angular_damping=0.0,
max_linear_velocity=1000.0,
max_angular_velocity=1000.0,
max_depenetration_velocity=1.0,
),
articulation_props=sim_utils.ArticulationRootPropertiesCfg(
enabled_self_collisions=True, solver_position_iteration_count=4, solver_velocity_iteration_count=0
),
# collision_props=sim_utils.CollisionPropertiesCfg(contact_offset=0.02, rest_offset=0.0),
),
init_state=ArticulationCfg.InitialStateCfg(
pos=(0.0, 0.0, 0.6),
joint_pos={
".*HAA": 0.0, # all HAA
".*F_HFE": 0.4, # both front HFE
".*H_HFE": -0.4, # both hind HFE
".*F_KFE": -0.8, # both front KFE
".*H_KFE": 0.8, # both hind KFE
},
),
actuators={"legs": ANYDRIVE_3_LSTM_ACTUATOR_CFG},
soft_joint_pos_limit_factor=0.95,
)
"""Configuration of ANYmal-D robot using actuator-net.
Note:
Since we don't have a publicly available actuator network for ANYmal-D, we use the same network as ANYmal-C.
This may impact the sim-to-real transfer performance.
"""
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_assets/docs/CHANGELOG.rst
Changelog
---------
0.1.2 (2024-04-03)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added configurations for different arms from Kinova Robotics and Rethink Robotics.
0.1.1 (2024-03-11)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Added configurations for allegro and shadow hand assets.
0.1.0 (2023-12-20)
~~~~~~~~~~~~~~~~~~
Added
^^^^^
* Moved all assets' configuration from ``omni.isaac.orbit`` to ``omni.isaac.orbit_assets`` extension.
NVIDIA-Omniverse/orbit/source/extensions/omni.isaac.orbit_assets/docs/README.md
# Orbit: Assets for Robots and Objects
This extension contains configurations for various assets and sensors. The configuration instances are
used to spawn and configure the corresponding assets in the simulation. They are passed to their
respective classes during construction.
## Organizing custom assets
For Orbit, we primarily store assets on the Omniverse Nucleus server. However, it may sometimes be
necessary to store assets locally (for example, while debugging). In such cases, the extension's `data`
directory can be used to host them temporarily.
Inside the `data` directory, we recommend following the same structure as our Nucleus directory
`Isaac/Samples/Orbit`. This helps us later to move these assets to the Nucleus server seamlessly.
The recommended directory structure inside `data` is as follows:
* **`Robots/<Company-Name>/<Robot-Name>`**: The USD files should be inside `<Robot-Name>` directory with
the name of the robot.
* **`Props/<Prop-Type>/<Prop-Name>`**: The USD files should be inside `<Prop-Name>` directory with the name
of the prop. This includes mounts, objects and markers.
* **`ActuatorNets/<Company-Name>`**: The actuator networks should be inside the `<Company-Name>` directory,
  named after the actuator that they model.
* **`Policies/<Task-Name>`**: The policy should be JIT/ONNX compiled with the name `policy.pt`. It should also
contain the parameters used for training the checkpoint. This is to ensure reproducibility.
* **`Test/<Test-Name>`**: Assets used for unit-testing purposes.
## Referring to the assets in your code
You can use the following snippet to refer to the assets:
```python
from omni.isaac.orbit_assets import ORBIT_ASSETS_DATA_DIR
# ANYmal-C
ANYMAL_C_USD_PATH = f"{ORBIT_ASSETS_DATA_DIR}/Robots/ANYbotics/ANYmal-C/anymal_c.usd"
# ANYmal-D
ANYMAL_D_USD_PATH = f"{ORBIT_ASSETS_DATA_DIR}/Robots/ANYbotics/ANYmal-D/anymal_d.usd"
```
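If an asset is stored locally, it can help to fail fast when the file is missing. The following is a minimal
sketch using the `check_file_path` utility from `omni.isaac.orbit.utils.assets` (the asset path is illustrative):

```python
from omni.isaac.orbit.utils.assets import check_file_path
from omni.isaac.orbit_assets import ORBIT_ASSETS_DATA_DIR

# resolve a local asset and verify that it exists before spawning it
usd_path = f"{ORBIT_ASSETS_DATA_DIR}/Robots/ANYbotics/ANYmal-C/anymal_c.usd"
if not check_file_path(usd_path):
    raise FileNotFoundError(f"Asset not found: {usd_path}")
```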
NVIDIA-Omniverse/orbit/source/standalone/tools/convert_mesh.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
Utility to convert a OBJ/STL/FBX into USD format.
The OBJ file format is a simple data-format that represents 3D geometry alone — namely, the position
of each vertex, the UV position of each texture coordinate vertex, vertex normals, and the faces that
make each polygon defined as a list of vertices, and texture vertices.
An STL file describes a raw, unstructured triangulated surface by the unit normal and vertices (ordered
by the right-hand rule) of the triangles using a three-dimensional Cartesian coordinate system.
FBX files are a type of 3D model file created using the Autodesk FBX software. They can be designed and
modified in various modeling applications, such as Maya, 3ds Max, and Blender. Moreover, FBX files typically
contain mesh, material, texture, and skeletal animation data.
Link: https://www.autodesk.com/products/fbx/overview
This script uses the asset converter extension from Isaac Sim (``omni.kit.asset_converter``) to convert a
OBJ/STL/FBX asset into USD format. It is designed as a convenience script for command-line use.
positional arguments:
input The path to the input mesh (.OBJ/.STL/.FBX) file.
output The path to store the USD file.
optional arguments:
-h, --help Show this help message and exit
  --make-instanceable       Make the asset instanceable for efficient cloning. (default: False)
--collision-approximation The method used for approximating collision mesh. Defaults to convexDecomposition.
Set to \"none\" to not add a collision mesh to the converted mesh. (default: convexDecomposition)
--mass The mass (in kg) to assign to the converted asset. (default: None)
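Example usage (all paths are illustrative):
  ./orbit.sh -p source/standalone/tools/convert_mesh.py assets/table.obj assets/table.usd \
      --make-instanceable --collision-approximation convexDecomposition --mass 2.0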
"""
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Utility to convert a mesh file into USD format.")
parser.add_argument("input", type=str, help="The path to the input mesh file.")
parser.add_argument("output", type=str, help="The path to store the USD file.")
parser.add_argument(
"--make-instanceable",
action="store_true",
default=False,
help="Make the asset instanceable for efficient cloning.",
)
parser.add_argument(
"--collision-approximation",
type=str,
default="convexDecomposition",
choices=["convexDecomposition", "convexHull", "none"],
help=(
'The method used for approximating collision mesh. Set to "none" '
"to not add a collision mesh to the converted mesh."
),
)
parser.add_argument(
"--mass",
type=float,
default=None,
help="The mass (in kg) to assign to the converted asset. If not provided, then no mass is added.",
)
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import contextlib
import os
import carb
import omni.isaac.core.utils.stage as stage_utils
import omni.kit.app
from omni.isaac.orbit.sim.converters import MeshConverter, MeshConverterCfg
from omni.isaac.orbit.sim.schemas import schemas_cfg
from omni.isaac.orbit.utils.assets import check_file_path
from omni.isaac.orbit.utils.dict import print_dict
def main():
# check valid file path
mesh_path = args_cli.input
if not os.path.isabs(mesh_path):
mesh_path = os.path.abspath(mesh_path)
if not check_file_path(mesh_path):
raise ValueError(f"Invalid mesh file path: {mesh_path}")
# create destination path
dest_path = args_cli.output
if not os.path.isabs(dest_path):
dest_path = os.path.abspath(dest_path)
print(dest_path)
print(os.path.dirname(dest_path))
print(os.path.basename(dest_path))
# Mass properties
if args_cli.mass is not None:
mass_props = schemas_cfg.MassPropertiesCfg(mass=args_cli.mass)
rigid_props = schemas_cfg.RigidBodyPropertiesCfg()
else:
mass_props = None
rigid_props = None
# Collision properties
collision_props = schemas_cfg.CollisionPropertiesCfg(collision_enabled=args_cli.collision_approximation != "none")
# Create Mesh converter config
mesh_converter_cfg = MeshConverterCfg(
mass_props=mass_props,
rigid_props=rigid_props,
collision_props=collision_props,
asset_path=mesh_path,
force_usd_conversion=True,
usd_dir=os.path.dirname(dest_path),
usd_file_name=os.path.basename(dest_path),
make_instanceable=args_cli.make_instanceable,
collision_approximation=args_cli.collision_approximation,
)
# Print info
print("-" * 80)
print("-" * 80)
print(f"Input Mesh file: {mesh_path}")
print("Mesh importer config:")
print_dict(mesh_converter_cfg.to_dict(), nesting=0)
print("-" * 80)
print("-" * 80)
# Create Mesh converter and import the file
mesh_converter = MeshConverter(mesh_converter_cfg)
# print output
print("Mesh importer output:")
print(f"Generated USD file: {mesh_converter.usd_path}")
print("-" * 80)
print("-" * 80)
# Determine if there is a GUI to update:
# acquire settings interface
carb_settings_iface = carb.settings.get_settings()
# read flag for whether a local GUI is enabled
local_gui = carb_settings_iface.get("/app/window/enabled")
# read flag for whether livestreaming GUI is enabled
livestream_gui = carb_settings_iface.get("/app/livestream/enabled")
# Simulate scene (if not headless)
if local_gui or livestream_gui:
# Open the stage with USD
stage_utils.open_stage(mesh_converter.usd_path)
# Reinitialize the simulation
app = omni.kit.app.get_app_interface()
# Run simulation
with contextlib.suppress(KeyboardInterrupt):
while app.is_running():
# perform step
app.update()
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
NVIDIA-Omniverse/orbit/source/standalone/tools/check_instanceable.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
This script uses the cloner API to check if an asset has been instanced properly.
Usage with different inputs (replace `<Asset-Path>` and `<Asset-Path-Instanced>` with the path to the
original asset and the instanced asset respectively):
```bash
./orbit.sh -p source/standalone/tools/check_instanceable.py <Asset-Path> -n 4096 --headless --physics
./orbit.sh -p source/standalone/tools/check_instanceable.py <Asset-Path-Instanced> -n 4096 --headless --physics
./orbit.sh -p source/standalone/tools/check_instanceable.py <Asset-Path> -n 4096 --headless
./orbit.sh -p source/standalone/tools/check_instanceable.py <Asset-Path-Instanced> -n 4096 --headless
```
Output from the above commands:
```bash
>>> Cloning time (cloner.clone) : 0.648198 seconds
>>> Setup time (sim.reset) : 5.843589 seconds
[#clones: 4096, physics: True] Asset: <Asset-Path-Instanced> : 6.491870 seconds
>>> Cloning time (cloner.clone) : 0.693133 seconds
>>> Setup time (sim.reset) : 50.860526 seconds
[#clones: 4096, physics: True] Asset: <Asset-Path> : 51.553743 seconds
>>> Cloning time (cloner.clone) : 0.687201 seconds
>>> Setup time (sim.reset) : 6.302215 seconds
[#clones: 4096, physics: False] Asset: <Asset-Path-Instanced> : 6.989500 seconds
>>> Cloning time (cloner.clone) : 0.678150 seconds
>>> Setup time (sim.reset) : 52.854054 seconds
[#clones: 4096, physics: False] Asset: <Asset-Path> : 53.532287 seconds
```
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
import contextlib
import os
# omni-isaac-orbit
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Utility to empirically check if an asset is instanced properly.")
parser.add_argument("input", type=str, help="The path to the USD file.")
parser.add_argument("-n", "--num_clones", type=int, default=128, help="Number of clones to spawn.")
parser.add_argument("-s", "--spacing", type=float, default=1.5, help="Spacing between instances in a grid.")
parser.add_argument("-p", "--physics", action="store_true", default=False, help="Clone assets using physics cloner.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import omni.isaac.core.utils.prims as prim_utils
from omni.isaac.cloner import GridCloner
from omni.isaac.core.simulation_context import SimulationContext
from omni.isaac.core.utils.carb import set_carb_setting
from omni.isaac.orbit.utils import Timer
from omni.isaac.orbit.utils.assets import check_file_path
def main():
"""Spawns the USD asset robot and clones it using Isaac Gym Cloner API."""
# check valid file path
if not check_file_path(args_cli.input):
raise ValueError(f"Invalid file path: {args_cli.input}")
# Load kit helper
sim = SimulationContext(
stage_units_in_meters=1.0, physics_dt=0.01, rendering_dt=0.01, backend="torch", device="cuda:0"
)
# enable flatcache which avoids passing data over to USD structure
# this speeds up the read-write operation of GPU buffers
if sim.get_physics_context().use_gpu_pipeline:
sim.get_physics_context().enable_flatcache(True)
# enable hydra scene-graph instancing
# this is needed to visualize the scene when flatcache is enabled
set_carb_setting(sim._settings, "/persistent/omnihydra/useSceneGraphInstancing", True)
# Create interface to clone the scene
cloner = GridCloner(spacing=args_cli.spacing)
cloner.define_base_env("/World/envs")
prim_utils.define_prim("/World/envs/env_0")
# Spawn things into stage
prim_utils.create_prim("/World/Light", "DistantLight")
# Everything under the namespace "/World/envs/env_0" will be cloned
prim_utils.create_prim("/World/envs/env_0/Asset", "Xform", usd_path=os.path.abspath(args_cli.input))
# Clone the scene
num_clones = args_cli.num_clones
# Create a timer to measure the cloning time
with Timer(f"[#clones: {num_clones}, physics: {args_cli.physics}] Asset: {args_cli.input}"):
# Clone the scene
with Timer(">>> Cloning time (cloner.clone)"):
cloner.define_base_env("/World/envs")
envs_prim_paths = cloner.generate_paths("/World/envs/env", num_paths=num_clones)
_ = cloner.clone(
source_prim_path="/World/envs/env_0", prim_paths=envs_prim_paths, replicate_physics=args_cli.physics
)
# Play the simulator
with Timer(">>> Setup time (sim.reset)"):
sim.reset()
# Simulate scene (if not headless)
if not args_cli.headless:
with contextlib.suppress(KeyboardInterrupt):
while sim.is_playing():
# perform step
sim.step()
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
NVIDIA-Omniverse/orbit/source/standalone/tools/blender_obj.py
#!/usr/bin/env python
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
Convert a mesh file to `.obj` using blender.
This file processes a given DAE or STL mesh file and saves the resulting mesh in OBJ format.
It needs to be called using the python packaged with blender, i.e.:
blender --background --python blender_obj.py -- -in_file FILE -out_file FILE
For more information: https://docs.blender.org/api/current/index.html
The script was tested on Blender 3.2 on Ubuntu 20.04LTS.
"""
from __future__ import annotations
import bpy
import os
import sys
def parse_cli_args():
"""Parse the input command line arguments.
Reference: https://developer.blender.org/diffusion/B/browse/master/release/scripts/templates_py/background_job.py
"""
import argparse
# get the args passed to blender after "--", all of which are ignored by
# blender so scripts may receive their own arguments
argv = sys.argv
if "--" not in argv:
argv = [] # as if no args are passed
else:
argv = argv[argv.index("--") + 1 :] # get all args after "--"
# When --help or no args are given, print this help
usage_text = (
f"Run blender in background mode with this script:\n\tblender --background --python {__file__} -- [options]"
)
parser = argparse.ArgumentParser(description=usage_text)
# Add arguments
parser.add_argument("-i", "--in_file", metavar="FILE", type=str, required=True, help="Path to input OBJ file.")
parser.add_argument("-o", "--out_file", metavar="FILE", type=str, required=True, help="Path to output OBJ file.")
args = parser.parse_args(argv)
# Check if any arguments provided
if not argv or not args.in_file or not args.out_file:
parser.print_help()
return None
# return arguments
return args
def convert_to_obj(in_file: str, out_file: str, save_usd: bool = False):
"""Convert a mesh file to `.obj` using blender.
Args:
in_file: Input mesh file to process.
        out_file: Path to store output obj file.
        save_usd: Whether to additionally export the converted mesh as a USD file. Defaults to False.
    """
# check valid input file
if not os.path.exists(in_file):
raise FileNotFoundError(in_file)
# add ending of file format
if not out_file.endswith(".obj"):
out_file += ".obj"
# create directory if it doesn't exist for destination file
if not os.path.exists(os.path.dirname(out_file)):
os.makedirs(os.path.dirname(out_file), exist_ok=True)
# reset scene to empty
bpy.ops.wm.read_factory_settings(use_empty=True)
# load object into scene
if in_file.endswith(".dae"):
bpy.ops.wm.collada_import(filepath=in_file)
elif in_file.endswith(".stl") or in_file.endswith(".STL"):
bpy.ops.import_mesh.stl(filepath=in_file)
else:
raise ValueError(f"Input file not in dae/stl format: {in_file}")
# convert to obj format and store with z up
# TODO: Read the convention from dae file instead of manually fixing it.
# Reference: https://docs.blender.org/api/2.79/bpy.ops.export_scene.html
bpy.ops.export_scene.obj(
filepath=out_file, check_existing=False, axis_forward="Y", axis_up="Z", global_scale=1, path_mode="RELATIVE"
)
# save it as usd as well
if save_usd:
out_file = out_file.replace("obj", "usd")
bpy.ops.wm.usd_export(filepath=out_file, check_existing=False)
if __name__ == "__main__":
# read arguments
cli_args = parse_cli_args()
# check CLI args
if cli_args is None:
sys.exit()
# process via blender
convert_to_obj(cli_args.in_file, cli_args.out_file)
NVIDIA-Omniverse/orbit/source/standalone/tools/process_meshes_to_obj.py
#!/usr/bin/env python
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Convert all mesh files to `.obj` in given folders."""
from __future__ import annotations
import argparse
import os
import shutil
import subprocess
# Constants
# Path to blender
BLENDER_EXE_PATH = shutil.which("blender")
def parse_cli_args():
"""Parse the input command line arguments.
Reference: https://developer.blender.org/diffusion/B/browse/master/release/scripts/templates_py/background_job.py
"""
# add argparse arguments
parser = argparse.ArgumentParser("Utility to convert all mesh files to `.obj` in given folders.")
parser.add_argument("input_dir", type=str, help="The input directory from which to load meshes.")
parser.add_argument(
"-o",
"--output_dir",
type=str,
default=None,
help="The output directory to save converted meshes into. Default is same as input directory.",
)
args_cli = parser.parse_args()
# resolve output directory
if args_cli.output_dir is None:
args_cli.output_dir = args_cli.input_dir
# return arguments
return args_cli
def run_blender_convert2obj(in_file: str, out_file: str):
"""Calls the python script using `subprocess` to perform processing of mesh file.
Args:
in_file: Input mesh file.
out_file: Output obj file.
"""
# resolve for python file
tools_dirname = os.path.dirname(os.path.abspath(__file__))
script_file = os.path.join(tools_dirname, "blender_obj.py")
# complete command
command_exe = f"{BLENDER_EXE_PATH} --background --python {script_file} -- -i {in_file} -o {out_file}"
# break command into list
command_exe_list = command_exe.split(" ")
# run command
subprocess.run(command_exe_list)
def convert_meshes(source_folders: list[str], destination_folders: list[str]):
"""Processes all mesh files of supported format into OBJ file using blender.
Args:
source_folders: List of directories to search for meshes.
destination_folders: List of directories to dump converted files.
"""
# create folder for corresponding destination
for folder in destination_folders:
os.makedirs(folder, exist_ok=True)
# iterate over each folder
for in_folder, out_folder in zip(source_folders, destination_folders):
# extract all dae files in the directory
mesh_filenames = [f for f in os.listdir(in_folder) if f.endswith("dae")]
mesh_filenames += [f for f in os.listdir(in_folder) if f.endswith("stl")]
mesh_filenames += [f for f in os.listdir(in_folder) if f.endswith("STL")]
# print status
print(f"Found {len(mesh_filenames)} files to process in directory: {in_folder}")
# iterate over each OBJ file
for mesh_file in mesh_filenames:
# extract meshname
mesh_name = os.path.splitext(mesh_file)[0]
# complete path of input and output files
in_file_path = os.path.join(in_folder, mesh_file)
out_file_path = os.path.join(out_folder, mesh_name + ".obj")
# perform blender processing
print("Processing: ", in_file_path)
run_blender_convert2obj(in_file_path, out_file_path)
if __name__ == "__main__":
# Parse command line arguments
args = parse_cli_args()
# Run conversion
convert_meshes([args.input_dir], [args.output_dir])
NVIDIA-Omniverse/orbit/source/standalone/tools/convert_urdf.py
# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
Utility to convert a URDF into USD format.
Unified Robot Description Format (URDF) is an XML file format used in ROS to describe all elements of
a robot. For more information, see: http://wiki.ros.org/urdf
This script uses the URDF importer extension from Isaac Sim (``omni.isaac.urdf_importer``) to convert a
URDF asset into USD format. It is designed as a convenience script for command-line use. For more
information on the URDF importer, see the documentation for the extension:
https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/ext_omni_isaac_urdf.html
positional arguments:
input The path to the input URDF file.
output The path to store the USD file.
optional arguments:
-h, --help Show this help message and exit
--merge-joints Consolidate links that are connected by fixed joints. (default: False)
--fix-base Fix the base to where it is imported. (default: False)
--make-instanceable Make the asset instanceable for efficient cloning. (default: False)
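Example usage (all paths are illustrative):
  ./orbit.sh -p source/standalone/tools/convert_urdf.py my_robot.urdf my_robot.usd \
      --merge-joints --fix-base --make-instanceable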
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Utility to convert a URDF into USD format.")
parser.add_argument("input", type=str, help="The path to the input URDF file.")
parser.add_argument("output", type=str, help="The path to store the USD file.")
parser.add_argument(
"--merge-joints",
action="store_true",
default=False,
help="Consolidate links that are connected by fixed joints.",
)
parser.add_argument("--fix-base", action="store_true", default=False, help="Fix the base to where it is imported.")
parser.add_argument(
"--make-instanceable",
action="store_true",
default=False,
help="Make the asset instanceable for efficient cloning.",
)
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import contextlib
import os
import carb
import omni.isaac.core.utils.stage as stage_utils
import omni.kit.app
from omni.isaac.orbit.sim.converters import UrdfConverter, UrdfConverterCfg
from omni.isaac.orbit.utils.assets import check_file_path
from omni.isaac.orbit.utils.dict import print_dict
def main():
# check valid file path
urdf_path = args_cli.input
if not os.path.isabs(urdf_path):
urdf_path = os.path.abspath(urdf_path)
if not check_file_path(urdf_path):
raise ValueError(f"Invalid file path: {urdf_path}")
# create destination path
dest_path = args_cli.output
if not os.path.isabs(dest_path):
dest_path = os.path.abspath(dest_path)
# Create Urdf converter config
urdf_converter_cfg = UrdfConverterCfg(
asset_path=urdf_path,
usd_dir=os.path.dirname(dest_path),
usd_file_name=os.path.basename(dest_path),
fix_base=args_cli.fix_base,
merge_fixed_joints=args_cli.merge_joints,
force_usd_conversion=True,
make_instanceable=args_cli.make_instanceable,
)
# Print info
print("-" * 80)
print("-" * 80)
print(f"Input URDF file: {urdf_path}")
print("URDF importer config:")
print_dict(urdf_converter_cfg.to_dict(), nesting=0)
print("-" * 80)
print("-" * 80)
# Create Urdf converter and import the file
urdf_converter = UrdfConverter(urdf_converter_cfg)
# print output
print("URDF importer output:")
print(f"Generated USD file: {urdf_converter.usd_path}")
print("-" * 80)
print("-" * 80)
# Determine if there is a GUI to update:
# acquire settings interface
carb_settings_iface = carb.settings.get_settings()
# read flag for whether a local GUI is enabled
local_gui = carb_settings_iface.get("/app/window/enabled")
# read flag for whether livestreaming GUI is enabled
livestream_gui = carb_settings_iface.get("/app/livestream/enabled")
# Simulate scene (if not headless)
if local_gui or livestream_gui:
# Open the stage with USD
stage_utils.open_stage(urdf_converter.usd_path)
# Reinitialize the simulation
app = omni.kit.app.get_app_interface()
# Run simulation
with contextlib.suppress(KeyboardInterrupt):
while app.is_running():
# perform step
app.update()
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
NVIDIA-Omniverse/orbit/source/standalone/tutorials/01_assets/run_articulation.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""This script demonstrates how to spawn a cart-pole and interact with it.
.. code-block:: bash
# Usage
./orbit.sh -p source/standalone/tutorials/01_assets/run_articulation.py
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Tutorial on spawning and interacting with an articulation.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import torch
import omni.isaac.core.utils.prims as prim_utils
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.assets import Articulation
from omni.isaac.orbit.sim import SimulationContext
##
# Pre-defined configs
##
from omni.isaac.orbit_assets import CARTPOLE_CFG # isort:skip
def design_scene() -> tuple[dict, list[list[float]]]:
"""Designs the scene."""
# Ground-plane
cfg = sim_utils.GroundPlaneCfg()
cfg.func("/World/defaultGroundPlane", cfg)
# Lights
cfg = sim_utils.DomeLightCfg(intensity=3000.0, color=(0.75, 0.75, 0.75))
cfg.func("/World/Light", cfg)
    # Create separate groups called "Origin1" and "Origin2"
    # Each group will have a robot in it
origins = [[0.0, 0.0, 0.0], [-1.0, 0.0, 0.0]]
# Origin 1
prim_utils.create_prim("/World/Origin1", "Xform", translation=origins[0])
# Origin 2
prim_utils.create_prim("/World/Origin2", "Xform", translation=origins[1])
# Articulation
cartpole_cfg = CARTPOLE_CFG.copy()
cartpole_cfg.prim_path = "/World/Origin.*/Robot"
cartpole = Articulation(cfg=cartpole_cfg)
# return the scene information
scene_entities = {"cartpole": cartpole}
return scene_entities, origins
def run_simulator(sim: sim_utils.SimulationContext, entities: dict[str, Articulation], origins: torch.Tensor):
"""Runs the simulation loop."""
# Extract scene entities
# note: we only do this here for readability. In general, it is better to access the entities directly from
# the dictionary. This dictionary is replaced by the InteractiveScene class in the next tutorial.
robot = entities["cartpole"]
# Define simulation stepping
sim_dt = sim.get_physics_dt()
count = 0
# Simulation loop
while simulation_app.is_running():
# Reset
if count % 500 == 0:
# reset counter
count = 0
# reset the scene entities
# root state
# we offset the root state by the origin since the states are written in simulation world frame
# if this is not done, then the robots will be spawned at the (0, 0, 0) of the simulation world
root_state = robot.data.default_root_state.clone()
root_state[:, :3] += origins
robot.write_root_state_to_sim(root_state)
# set joint positions with some noise
joint_pos, joint_vel = robot.data.default_joint_pos.clone(), robot.data.default_joint_vel.clone()
joint_pos += torch.rand_like(joint_pos) * 0.1
robot.write_joint_state_to_sim(joint_pos, joint_vel)
# clear internal buffers
robot.reset()
print("[INFO]: Resetting robot state...")
# Apply random action
# -- generate random joint efforts
efforts = torch.randn_like(robot.data.joint_pos) * 5.0
# -- apply action to the robot
robot.set_joint_effort_target(efforts)
# -- write data to sim
robot.write_data_to_sim()
# Perform step
sim.step()
# Increment counter
count += 1
# Update buffers
robot.update(sim_dt)
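# The loop above drives the cart with random joint efforts. For comparison, the
# sketch below (illustrative; never called in this tutorial) shows the
# position-level counterpart using the same Articulation API:
def apply_position_targets(robot: Articulation):
    """Command the default joint positions instead of efforts (sketch)."""
    robot.set_joint_position_target(robot.data.default_joint_pos.clone())
    # position targets flow through the same write path as effort targets
    robot.write_data_to_sim()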
def main():
"""Main function."""
# Load kit helper
sim_cfg = sim_utils.SimulationCfg(device="cpu", use_gpu_pipeline=False)
sim = SimulationContext(sim_cfg)
# Set main camera
sim.set_camera_view([2.5, 0.0, 4.0], [0.0, 0.0, 2.0])
# Design scene
scene_entities, scene_origins = design_scene()
scene_origins = torch.tensor(scene_origins, device=sim.device)
# Play the simulator
sim.reset()
# Now we are ready!
print("[INFO]: Setup complete...")
# Run the simulator
run_simulator(sim, scene_entities, scene_origins)
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
NVIDIA-Omniverse/orbit/source/standalone/tutorials/01_assets/run_rigid_object.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
This script demonstrates how to create a rigid object and interact with it.
.. code-block:: bash
# Usage
./orbit.sh -p source/standalone/tutorials/01_assets/run_rigid_object.py
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Tutorial on spawning and interacting with a rigid object.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import torch
import omni.isaac.core.utils.prims as prim_utils
import omni.isaac.orbit.sim as sim_utils
import omni.isaac.orbit.utils.math as math_utils
from omni.isaac.orbit.assets import RigidObject, RigidObjectCfg
from omni.isaac.orbit.sim import SimulationContext
def design_scene():
"""Designs the scene."""
# Ground-plane
cfg = sim_utils.GroundPlaneCfg()
cfg.func("/World/defaultGroundPlane", cfg)
# Lights
cfg = sim_utils.DomeLightCfg(intensity=2000.0, color=(0.8, 0.8, 0.8))
cfg.func("/World/Light", cfg)
    # Create separate groups called "Origin0" through "Origin3"
    # Each group will have a cone in it
origins = [[0.25, 0.25, 0.0], [-0.25, 0.25, 0.0], [0.25, -0.25, 0.0], [-0.25, -0.25, 0.0]]
for i, origin in enumerate(origins):
prim_utils.create_prim(f"/World/Origin{i}", "Xform", translation=origin)
# Rigid Object
cone_cfg = RigidObjectCfg(
prim_path="/World/Origin.*/Cone",
spawn=sim_utils.ConeCfg(
radius=0.1,
height=0.2,
rigid_props=sim_utils.RigidBodyPropertiesCfg(),
mass_props=sim_utils.MassPropertiesCfg(mass=1.0),
collision_props=sim_utils.CollisionPropertiesCfg(),
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(0.0, 1.0, 0.0), metallic=0.2),
),
init_state=RigidObjectCfg.InitialStateCfg(),
)
cone_object = RigidObject(cfg=cone_cfg)
# return the scene information
scene_entities = {"cone": cone_object}
return scene_entities, origins
def run_simulator(sim: sim_utils.SimulationContext, entities: dict[str, RigidObject], origins: torch.Tensor):
"""Runs the simulation loop."""
# Extract scene entities
# note: we only do this here for readability. In general, it is better to access the entities directly from
# the dictionary. This dictionary is replaced by the InteractiveScene class in the next tutorial.
cone_object = entities["cone"]
# Define simulation stepping
sim_dt = sim.get_physics_dt()
sim_time = 0.0
count = 0
# Simulate physics
while simulation_app.is_running():
# reset
if count % 250 == 0:
# reset counters
sim_time = 0.0
count = 0
# reset root state
root_state = cone_object.data.default_root_state.clone()
# sample a random position on a cylinder around the origins
root_state[:, :3] += origins
root_state[:, :3] += math_utils.sample_cylinder(
radius=0.1, h_range=(0.25, 0.5), size=cone_object.num_instances, device=cone_object.device
)
# write root state to simulation
cone_object.write_root_state_to_sim(root_state)
# reset buffers
cone_object.reset()
print("----------------------------------------")
print("[INFO]: Resetting object state...")
# apply sim data
cone_object.write_data_to_sim()
# perform step
sim.step()
# update sim-time
sim_time += sim_dt
count += 1
# update buffers
cone_object.update(sim_dt)
# print the root position
if count % 50 == 0:
print(f"Root position (in world): {cone_object.data.root_state_w[:, :3]}")
def main():
"""Main function."""
# Load kit helper
sim_cfg = sim_utils.SimulationCfg()
sim = SimulationContext(sim_cfg)
# Set main camera
sim.set_camera_view(eye=[1.5, 0.0, 1.0], target=[0.0, 0.0, 0.0])
# Design scene
scene_entities, scene_origins = design_scene()
scene_origins = torch.tensor(scene_origins, device=sim.device)
# Play the simulator
sim.reset()
# Now we are ready!
print("[INFO]: Setup complete...")
# Run the simulator
run_simulator(sim, scene_entities, scene_origins)
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
NVIDIA-Omniverse/orbit/source/standalone/tutorials/02_scene/create_scene.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""This script demonstrates how to use the interactive scene interface to setup a scene with multiple prims.
.. code-block:: bash
# Usage
    ./orbit.sh -p source/standalone/tutorials/02_scene/create_scene.py --num_envs 32
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Tutorial on using the interactive scene interface.")
parser.add_argument("--num_envs", type=int, default=2, help="Number of environments to spawn.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import torch
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.assets import ArticulationCfg, AssetBaseCfg
from omni.isaac.orbit.scene import InteractiveScene, InteractiveSceneCfg
from omni.isaac.orbit.sim import SimulationContext
from omni.isaac.orbit.utils import configclass
##
# Pre-defined configs
##
from omni.isaac.orbit_assets import CARTPOLE_CFG # isort:skip
@configclass
class CartpoleSceneCfg(InteractiveSceneCfg):
"""Configuration for a cart-pole scene."""
# ground plane
ground = AssetBaseCfg(prim_path="/World/defaultGroundPlane", spawn=sim_utils.GroundPlaneCfg())
# lights
dome_light = AssetBaseCfg(
prim_path="/World/Light", spawn=sim_utils.DomeLightCfg(intensity=3000.0, color=(0.75, 0.75, 0.75))
)
# articulation
cartpole: ArticulationCfg = CARTPOLE_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
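    # Illustrative (unused) example: any additional per-environment asset is
    # declared the same way; the {ENV_REGEX_NS} placeholder is what
    # InteractiveScene expands into one prim path per cloned environment.
    # table = AssetBaseCfg(
    #     prim_path="{ENV_REGEX_NS}/Table",
    #     spawn=sim_utils.CuboidCfg(size=(1.0, 1.0, 0.1)),
    # )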
def run_simulator(sim: sim_utils.SimulationContext, scene: InteractiveScene):
"""Runs the simulation loop."""
# Extract scene entities
# note: we only do this here for readability.
robot = scene["cartpole"]
# Define simulation stepping
sim_dt = sim.get_physics_dt()
count = 0
# Simulation loop
while simulation_app.is_running():
# Reset
if count % 500 == 0:
# reset counter
count = 0
# reset the scene entities
# root state
# we offset the root state by the origin since the states are written in simulation world frame
# if this is not done, then the robots will be spawned at the (0, 0, 0) of the simulation world
root_state = robot.data.default_root_state.clone()
root_state[:, :3] += scene.env_origins
robot.write_root_state_to_sim(root_state)
# set joint positions with some noise
joint_pos, joint_vel = robot.data.default_joint_pos.clone(), robot.data.default_joint_vel.clone()
joint_pos += torch.rand_like(joint_pos) * 0.1
robot.write_joint_state_to_sim(joint_pos, joint_vel)
# clear internal buffers
scene.reset()
print("[INFO]: Resetting robot state...")
# Apply random action
# -- generate random joint efforts
efforts = torch.randn_like(robot.data.joint_pos) * 5.0
# -- apply action to the robot
robot.set_joint_effort_target(efforts)
# -- write data to sim
scene.write_data_to_sim()
# Perform step
sim.step()
# Increment counter
count += 1
# Update buffers
scene.update(sim_dt)
def main():
"""Main function."""
# Load kit helper
sim_cfg = sim_utils.SimulationCfg(device="cpu", use_gpu_pipeline=False)
sim = SimulationContext(sim_cfg)
# Set main camera
sim.set_camera_view([2.5, 0.0, 4.0], [0.0, 0.0, 2.0])
# Design scene
scene_cfg = CartpoleSceneCfg(num_envs=args_cli.num_envs, env_spacing=2.0)
scene = InteractiveScene(scene_cfg)
# Play the simulator
sim.reset()
# Now we are ready!
print("[INFO]: Setup complete...")
# Run the simulator
run_simulator(sim, scene)
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
NVIDIA-Omniverse/orbit/source/standalone/tutorials/03_envs/create_cartpole_base_env.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
This script demonstrates how to create a simple environment with a cartpole. It combines the concepts of
scene, action, observation and event managers to create an environment.
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Tutorial on creating a cartpole base environment.")
parser.add_argument("--num_envs", type=int, default=16, help="Number of environments to spawn.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import math
import torch
import omni.isaac.orbit.envs.mdp as mdp
from omni.isaac.orbit.envs import BaseEnv, BaseEnvCfg
from omni.isaac.orbit.managers import EventTermCfg as EventTerm
from omni.isaac.orbit.managers import ObservationGroupCfg as ObsGroup
from omni.isaac.orbit.managers import ObservationTermCfg as ObsTerm
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit_tasks.classic.cartpole.cartpole_env_cfg import CartpoleSceneCfg
@configclass
class ActionsCfg:
"""Action specifications for the environment."""
joint_efforts = mdp.JointEffortActionCfg(asset_name="robot", joint_names=["slider_to_cart"], scale=5.0)
@configclass
class ObservationsCfg:
"""Observation specifications for the environment."""
@configclass
class PolicyCfg(ObsGroup):
"""Observations for policy group."""
# observation terms (order preserved)
joint_pos_rel = ObsTerm(func=mdp.joint_pos_rel)
joint_vel_rel = ObsTerm(func=mdp.joint_vel_rel)
def __post_init__(self) -> None:
self.enable_corruption = False
self.concatenate_terms = True
# observation groups
policy: PolicyCfg = PolicyCfg()
@configclass
class EventCfg:
"""Configuration for events."""
# on startup
add_pole_mass = EventTerm(
func=mdp.add_body_mass,
mode="startup",
params={
"asset_cfg": SceneEntityCfg("robot", body_names=["pole"]),
"mass_range": (0.1, 0.5),
},
)
# on reset
reset_cart_position = EventTerm(
func=mdp.reset_joints_by_offset,
mode="reset",
params={
"asset_cfg": SceneEntityCfg("robot", joint_names=["slider_to_cart"]),
"position_range": (-1.0, 1.0),
"velocity_range": (-0.1, 0.1),
},
)
reset_pole_position = EventTerm(
func=mdp.reset_joints_by_offset,
mode="reset",
params={
"asset_cfg": SceneEntityCfg("robot", joint_names=["cart_to_pole"]),
"position_range": (-0.125 * math.pi, 0.125 * math.pi),
"velocity_range": (-0.01 * math.pi, 0.01 * math.pi),
},
)
@configclass
class CartpoleEnvCfg(BaseEnvCfg):
"""Configuration for the cartpole environment."""
# Scene settings
scene = CartpoleSceneCfg(num_envs=1024, env_spacing=2.5)
# Basic settings
observations = ObservationsCfg()
actions = ActionsCfg()
events = EventCfg()
def __post_init__(self):
"""Post initialization."""
# viewer settings
self.viewer.eye = [4.5, 0.0, 6.0]
self.viewer.lookat = [0.0, 0.0, 2.0]
# step settings
self.decimation = 4 # env step every 4 sim steps: 200Hz / 4 = 50Hz
# simulation settings
self.sim.dt = 0.005 # sim step every 5ms: 200Hz
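# A quick sanity check of the settings above (illustrative arithmetic): with
# sim.dt = 0.005 s the physics runs at 200 Hz, and decimation = 4 means one
# environment step spans 4 * 0.005 = 0.02 s, i.e. a 50 Hz control rate.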
def main():
"""Main function."""
# parse the arguments
env_cfg = CartpoleEnvCfg()
env_cfg.scene.num_envs = args_cli.num_envs
# setup base environment
env = BaseEnv(cfg=env_cfg)
# simulate physics
count = 0
while simulation_app.is_running():
with torch.inference_mode():
# reset
if count % 300 == 0:
count = 0
env.reset()
print("-" * 80)
print("[INFO]: Resetting environment...")
# sample random actions
joint_efforts = torch.randn_like(env.action_manager.action)
# step the environment
obs, _ = env.step(joint_efforts)
# print current orientation of pole
print("[Env 0]: Pole joint: ", obs["policy"][0][1].item())
# update counter
count += 1
# close the environment
env.close()
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
NVIDIA-Omniverse/orbit/source/standalone/tutorials/03_envs/run_cartpole_rl_env.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
This script demonstrates how to run the RL environment for the cartpole balancing task.
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Tutorial on running the cartpole RL environment.")
parser.add_argument("--num_envs", type=int, default=16, help="Number of environments to spawn.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import torch
from omni.isaac.orbit.envs import RLTaskEnv
from omni.isaac.orbit_tasks.classic.cartpole.cartpole_env_cfg import CartpoleEnvCfg
def main():
"""Main function."""
# create environment configuration
env_cfg = CartpoleEnvCfg()
env_cfg.scene.num_envs = args_cli.num_envs
# setup RL environment
env = RLTaskEnv(cfg=env_cfg)
# simulate physics
count = 0
while simulation_app.is_running():
with torch.inference_mode():
# reset
if count % 300 == 0:
count = 0
env.reset()
print("-" * 80)
print("[INFO]: Resetting environment...")
# sample random actions
joint_efforts = torch.randn_like(env.action_manager.action)
# step the environment
obs, rew, terminated, truncated, info = env.step(joint_efforts)
# print current orientation of pole
print("[Env 0]: Pole joint: ", obs["policy"][0][1].item())
# update counter
count += 1
# close the environment
env.close()
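# Note (illustrative): RLTaskEnv.step() follows the gymnasium-style 5-tuple
# (obs, rew, terminated, truncated, info), so a combined done mask for logging
# would be `dones = terminated | truncated`.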
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
NVIDIA-Omniverse/orbit/source/standalone/tutorials/05_controllers/run_diff_ik.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
This script demonstrates how to use the differential inverse kinematics controller with the simulator.
The differential IK controller can be configured in different modes. It uses the Jacobians computed by
PhysX, which enables parallelized computation of the inverse kinematics across all environments.
.. code-block:: bash
# Usage
    ./orbit.sh -p source/standalone/tutorials/05_controllers/run_diff_ik.py
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Tutorial on using the differential IK controller.")
parser.add_argument("--robot", type=str, default="franka_panda", help="Name of the robot.")
parser.add_argument("--num_envs", type=int, default=128, help="Number of environments to spawn.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import torch
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.assets import AssetBaseCfg
from omni.isaac.orbit.controllers import DifferentialIKController, DifferentialIKControllerCfg
from omni.isaac.orbit.managers import SceneEntityCfg
from omni.isaac.orbit.markers import VisualizationMarkers
from omni.isaac.orbit.markers.config import FRAME_MARKER_CFG
from omni.isaac.orbit.scene import InteractiveScene, InteractiveSceneCfg
from omni.isaac.orbit.utils import configclass
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
from omni.isaac.orbit.utils.math import subtract_frame_transforms
##
# Pre-defined configs
##
from omni.isaac.orbit_assets import FRANKA_PANDA_HIGH_PD_CFG, UR10_CFG # isort:skip
@configclass
class TableTopSceneCfg(InteractiveSceneCfg):
"""Configuration for a cart-pole scene."""
# ground plane
ground = AssetBaseCfg(
prim_path="/World/defaultGroundPlane",
spawn=sim_utils.GroundPlaneCfg(),
init_state=AssetBaseCfg.InitialStateCfg(pos=(0.0, 0.0, -1.05)),
)
# lights
dome_light = AssetBaseCfg(
prim_path="/World/Light", spawn=sim_utils.DomeLightCfg(intensity=3000.0, color=(0.75, 0.75, 0.75))
)
# mount
table = AssetBaseCfg(
prim_path="{ENV_REGEX_NS}/Table",
spawn=sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/Mounts/Stand/stand_instanceable.usd", scale=(2.0, 2.0, 2.0)
),
)
# articulation
if args_cli.robot == "franka_panda":
robot = FRANKA_PANDA_HIGH_PD_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
elif args_cli.robot == "ur10":
robot = UR10_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
else:
raise ValueError(f"Robot {args_cli.robot} is not supported. Valid: franka_panda, ur10")
def run_simulator(sim: sim_utils.SimulationContext, scene: InteractiveScene):
"""Runs the simulation loop."""
# Extract scene entities
# note: we only do this here for readability.
robot = scene["robot"]
# Create controller
diff_ik_cfg = DifferentialIKControllerCfg(command_type="pose", use_relative_mode=False, ik_method="dls")
diff_ik_controller = DifferentialIKController(diff_ik_cfg, num_envs=scene.num_envs, device=sim.device)
# Markers
frame_marker_cfg = FRAME_MARKER_CFG.copy()
frame_marker_cfg.markers["frame"].scale = (0.1, 0.1, 0.1)
ee_marker = VisualizationMarkers(frame_marker_cfg.replace(prim_path="/Visuals/ee_current"))
goal_marker = VisualizationMarkers(frame_marker_cfg.replace(prim_path="/Visuals/ee_goal"))
# Define goals for the arm
ee_goals = [
[0.5, 0.5, 0.7, 0.707, 0, 0.707, 0],
[0.5, -0.4, 0.6, 0.707, 0.707, 0.0, 0.0],
[0.5, 0, 0.5, 0.0, 1.0, 0.0, 0.0],
]
ee_goals = torch.tensor(ee_goals, device=sim.device)
# Track the given command
current_goal_idx = 0
# Create buffers to store actions
ik_commands = torch.zeros(scene.num_envs, diff_ik_controller.action_dim, device=robot.device)
ik_commands[:] = ee_goals[current_goal_idx]
# Specify robot-specific parameters
if args_cli.robot == "franka_panda":
robot_entity_cfg = SceneEntityCfg("robot", joint_names=["panda_joint.*"], body_names=["panda_hand"])
elif args_cli.robot == "ur10":
robot_entity_cfg = SceneEntityCfg("robot", joint_names=[".*"], body_names=["ee_link"])
else:
raise ValueError(f"Robot {args_cli.robot} is not supported. Valid: franka_panda, ur10")
# Resolving the scene entities
robot_entity_cfg.resolve(scene)
# Obtain the frame index of the end-effector
# For a fixed base robot, the frame index is one less than the body index. This is because
# the root body is not included in the returned Jacobians.
if robot.is_fixed_base:
ee_jacobi_idx = robot_entity_cfg.body_ids[0] - 1
else:
ee_jacobi_idx = robot_entity_cfg.body_ids[0]
# Define simulation stepping
sim_dt = sim.get_physics_dt()
count = 0
# Simulation loop
while simulation_app.is_running():
# reset
if count % 150 == 0:
            # reset counter
count = 0
# reset joint state
joint_pos = robot.data.default_joint_pos.clone()
joint_vel = robot.data.default_joint_vel.clone()
robot.write_joint_state_to_sim(joint_pos, joint_vel)
robot.reset()
# reset actions
ik_commands[:] = ee_goals[current_goal_idx]
joint_pos_des = joint_pos[:, robot_entity_cfg.joint_ids].clone()
# reset controller
diff_ik_controller.reset()
diff_ik_controller.set_command(ik_commands)
# change goal
current_goal_idx = (current_goal_idx + 1) % len(ee_goals)
else:
# obtain quantities from simulation
jacobian = robot.root_physx_view.get_jacobians()[:, ee_jacobi_idx, :, robot_entity_cfg.joint_ids]
ee_pose_w = robot.data.body_state_w[:, robot_entity_cfg.body_ids[0], 0:7]
root_pose_w = robot.data.root_state_w[:, 0:7]
joint_pos = robot.data.joint_pos[:, robot_entity_cfg.joint_ids]
# compute frame in root frame
ee_pos_b, ee_quat_b = subtract_frame_transforms(
root_pose_w[:, 0:3], root_pose_w[:, 3:7], ee_pose_w[:, 0:3], ee_pose_w[:, 3:7]
)
# compute the joint commands
joint_pos_des = diff_ik_controller.compute(ee_pos_b, ee_quat_b, jacobian, joint_pos)
# apply actions
robot.set_joint_position_target(joint_pos_des, joint_ids=robot_entity_cfg.joint_ids)
scene.write_data_to_sim()
# perform step
sim.step()
# update sim-time
count += 1
# update buffers
scene.update(sim_dt)
# obtain quantities from simulation
ee_pose_w = robot.data.body_state_w[:, robot_entity_cfg.body_ids[0], 0:7]
# update marker positions
ee_marker.visualize(ee_pose_w[:, 0:3], ee_pose_w[:, 3:7])
goal_marker.visualize(ik_commands[:, 0:3] + scene.env_origins, ik_commands[:, 3:7])
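# Illustrative helper (not called above): Euclidean end-effector position error,
# a simple convergence check for the controller. Both arguments are (N, 7) pose
# tensors laid out as [x, y, z, qw, qx, qy, qz], matching the buffers used in
# run_simulator().
def position_error(ee_pose_w: torch.Tensor, goal_pose_w: torch.Tensor) -> torch.Tensor:
    """Per-environment distance between current and commanded positions (sketch)."""
    return torch.norm(ee_pose_w[:, 0:3] - goal_pose_w[:, 0:3], dim=-1)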
def main():
"""Main function."""
# Load kit helper
sim_cfg = sim_utils.SimulationCfg(dt=0.01)
sim = sim_utils.SimulationContext(sim_cfg)
# Set main camera
sim.set_camera_view([2.5, 2.5, 2.5], [0.0, 0.0, 0.0])
# Design scene
scene_cfg = TableTopSceneCfg(num_envs=args_cli.num_envs, env_spacing=2.0)
scene = InteractiveScene(scene_cfg)
# Play the simulator
sim.reset()
# Now we are ready!
print("[INFO]: Setup complete...")
# Run the simulator
run_simulator(sim, scene)
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
NVIDIA-Omniverse/orbit/source/standalone/tutorials/04_sensors/run_usd_camera.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
This script shows how to use the camera sensor from the Orbit framework.
The camera sensor is created and interfaced through the Omniverse Replicator API. However, instead of using
the simulator or OpenGL convention for the camera, we use the robotics or ROS convention.
.. code-block:: bash
# Usage with GUI
./orbit.sh -p source/standalone/tutorials/04_sensors/run_usd_camera.py
# Usage with headless
./orbit.sh -p source/standalone/tutorials/04_sensors/run_usd_camera.py --headless --offscreen_render
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="This script demonstrates how to use the camera sensor.")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU device for camera output.")
parser.add_argument(
"--draw",
action="store_true",
default=False,
help="Draw the pointcloud from camera at index specified by ``--camera_id``.",
)
parser.add_argument(
"--save",
action="store_true",
default=False,
help="Save the data from camera at index specified by ``--camera_id``.",
)
parser.add_argument(
"--camera_id",
type=int,
choices={0, 1},
default=0,
help=(
"The camera ID to use for displaying points or saving the camera data. Default is 0."
" The viewport will always initialize with the perspective of camera 0."
),
)
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import numpy as np
import os
import random
import torch
import omni.isaac.core.utils.prims as prim_utils
import omni.replicator.core as rep
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.assets import RigidObject, RigidObjectCfg
from omni.isaac.orbit.markers import VisualizationMarkers
from omni.isaac.orbit.markers.config import RAY_CASTER_MARKER_CFG
from omni.isaac.orbit.sensors.camera import Camera, CameraCfg
from omni.isaac.orbit.sensors.camera.utils import create_pointcloud_from_depth
from omni.isaac.orbit.utils import convert_dict_to_backend
def define_sensor() -> Camera:
"""Defines the camera sensor to add to the scene."""
# Setup camera sensor
# In contrast to the ray-cast camera, we spawn the prim at these locations.
# This means the camera sensor will be attached to these prims.
prim_utils.create_prim("/World/Origin_00", "Xform")
prim_utils.create_prim("/World/Origin_01", "Xform")
camera_cfg = CameraCfg(
prim_path="/World/Origin_.*/CameraSensor",
update_period=0,
height=480,
width=640,
data_types=[
"rgb",
"distance_to_image_plane",
"normals",
"semantic_segmentation",
"instance_segmentation_fast",
"instance_id_segmentation_fast",
],
colorize_semantic_segmentation=True,
colorize_instance_id_segmentation=True,
colorize_instance_segmentation=True,
spawn=sim_utils.PinholeCameraCfg(
focal_length=24.0, focus_distance=400.0, horizontal_aperture=20.955, clipping_range=(0.1, 1.0e5)
),
)
# Create camera
camera = Camera(cfg=camera_cfg)
return camera
def design_scene() -> dict:
"""Design the scene."""
# Populate scene
# -- Ground-plane
cfg = sim_utils.GroundPlaneCfg()
cfg.func("/World/defaultGroundPlane", cfg)
# -- Lights
cfg = sim_utils.DistantLightCfg(intensity=3000.0, color=(0.75, 0.75, 0.75))
cfg.func("/World/Light", cfg)
# Create a dictionary for the scene entities
scene_entities = {}
# Xform to hold objects
prim_utils.create_prim("/World/Objects", "Xform")
# Random objects
for i in range(8):
# sample random position
position = np.random.rand(3) - np.asarray([0.05, 0.05, -1.0])
position *= np.asarray([1.5, 1.5, 0.5])
# sample random color
color = (random.random(), random.random(), random.random())
# choose random prim type
prim_type = random.choice(["Cube", "Cone", "Cylinder"])
common_properties = {
"rigid_props": sim_utils.RigidBodyPropertiesCfg(),
"mass_props": sim_utils.MassPropertiesCfg(mass=5.0),
"collision_props": sim_utils.CollisionPropertiesCfg(),
"visual_material": sim_utils.PreviewSurfaceCfg(diffuse_color=color, metallic=0.5),
"semantic_tags": [("class", prim_type)],
}
if prim_type == "Cube":
shape_cfg = sim_utils.CuboidCfg(size=(0.25, 0.25, 0.25), **common_properties)
elif prim_type == "Cone":
shape_cfg = sim_utils.ConeCfg(radius=0.1, height=0.25, **common_properties)
elif prim_type == "Cylinder":
shape_cfg = sim_utils.CylinderCfg(radius=0.25, height=0.25, **common_properties)
# Rigid Object
obj_cfg = RigidObjectCfg(
prim_path=f"/World/Objects/Obj_{i:02d}",
spawn=shape_cfg,
init_state=RigidObjectCfg.InitialStateCfg(pos=position),
)
scene_entities[f"rigid_object{i}"] = RigidObject(cfg=obj_cfg)
# Sensors
camera = define_sensor()
# return the scene information
scene_entities["camera"] = camera
return scene_entities
def run_simulator(sim: sim_utils.SimulationContext, scene_entities: dict):
"""Run the simulator."""
# extract entities for simplified notation
camera: Camera = scene_entities["camera"]
# Create replicator writer
output_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), "output", "camera")
rep_writer = rep.BasicWriter(
output_dir=output_dir,
frame_padding=0,
colorize_instance_id_segmentation=camera.cfg.colorize_instance_id_segmentation,
colorize_instance_segmentation=camera.cfg.colorize_instance_segmentation,
colorize_semantic_segmentation=camera.cfg.colorize_semantic_segmentation,
)
# Camera positions, targets, orientations
camera_positions = torch.tensor([[2.5, 2.5, 2.5], [-2.5, -2.5, 2.5]], device=sim.device)
camera_targets = torch.tensor([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]], device=sim.device)
# These orientations are in ROS-convention, and will position the cameras to view the origin
camera_orientations = torch.tensor( # noqa: F841
[[-0.1759, 0.3399, 0.8205, -0.4247], [-0.4247, 0.8205, -0.3399, 0.1759]], device=sim.device
)
# Set pose: There are two ways to set the pose of the camera.
# -- Option-1: Set pose using view
camera.set_world_poses_from_view(camera_positions, camera_targets)
# -- Option-2: Set pose using ROS
# camera.set_world_poses(camera_positions, camera_orientations, convention="ros")
# Index of the camera to use for visualization and saving
camera_index = args_cli.camera_id
# Create the markers for the --draw option outside of is_running() loop
if sim.has_gui() and args_cli.draw:
cfg = RAY_CASTER_MARKER_CFG.replace(prim_path="/Visuals/CameraPointCloud")
cfg.markers["hit"].radius = 0.002
pc_markers = VisualizationMarkers(cfg)
# Simulate physics
while simulation_app.is_running():
# Step simulation
sim.step()
# Update camera data
camera.update(dt=sim.get_physics_dt())
# Print camera info
print(camera)
if "rgb" in camera.data.output.keys():
print("Received shape of rgb image : ", camera.data.output["rgb"].shape)
if "distance_to_image_plane" in camera.data.output.keys():
print("Received shape of depth image : ", camera.data.output["distance_to_image_plane"].shape)
if "normals" in camera.data.output.keys():
print("Received shape of normals : ", camera.data.output["normals"].shape)
if "semantic_segmentation" in camera.data.output.keys():
print("Received shape of semantic segm. : ", camera.data.output["semantic_segmentation"].shape)
if "instance_segmentation_fast" in camera.data.output.keys():
print("Received shape of instance segm. : ", camera.data.output["instance_segmentation_fast"].shape)
if "instance_id_segmentation_fast" in camera.data.output.keys():
print("Received shape of instance id segm.: ", camera.data.output["instance_id_segmentation_fast"].shape)
print("-------------------------------")
# Extract camera data
if args_cli.save:
# Save images from camera at camera_index
# note: BasicWriter only supports saving data in numpy format, so we need to convert the data to numpy.
# tensordict allows easy indexing of tensors in the dictionary
single_cam_data = convert_dict_to_backend(camera.data.output[camera_index], backend="numpy")
# Extract the other information
single_cam_info = camera.data.info[camera_index]
# Pack data back into replicator format to save them using its writer
rep_output = dict()
for key, data, info in zip(single_cam_data.keys(), single_cam_data.values(), single_cam_info.values()):
if info is not None:
rep_output[key] = {"data": data, "info": info}
else:
rep_output[key] = data
# Save images
# Note: We need to provide On-time data for Replicator to save the images.
rep_output["trigger_outputs"] = {"on_time": camera.frame[camera_index]}
rep_writer.write(rep_output)
# Draw pointcloud if there is a GUI and --draw has been passed
if sim.has_gui() and args_cli.draw and "distance_to_image_plane" in camera.data.output.keys():
# Derive pointcloud from camera at camera_index
pointcloud = create_pointcloud_from_depth(
intrinsic_matrix=camera.data.intrinsic_matrices[camera_index],
depth=camera.data.output[camera_index]["distance_to_image_plane"],
position=camera.data.pos_w[camera_index],
orientation=camera.data.quat_w_ros[camera_index],
device=sim.device,
)
# In the first few steps, things are still being instanced and Camera.data
# can be empty. If we attempt to visualize an empty pointcloud it will crash
# the sim, so we check that the pointcloud is not empty.
if pointcloud.size()[0] > 0:
pc_markers.visualize(translations=pointcloud)
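# Illustrative wrapper (not called above) around the same utility used in the
# loop: back-project one camera's depth image into a world-frame point cloud
# using its intrinsics and ROS-convention pose buffers.
def depth_to_points_world(camera: Camera, index: int, device: str = "cpu") -> torch.Tensor:
    """Lift the depth image of camera `index` to world-frame points (sketch)."""
    return create_pointcloud_from_depth(
        intrinsic_matrix=camera.data.intrinsic_matrices[index],
        depth=camera.data.output[index]["distance_to_image_plane"],
        position=camera.data.pos_w[index],
        orientation=camera.data.quat_w_ros[index],
        device=device,
    )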
def main():
"""Main function."""
# Load simulation context
sim_cfg = sim_utils.SimulationCfg(device="cpu" if args_cli.cpu else "cuda")
sim = sim_utils.SimulationContext(sim_cfg)
# Set main camera
sim.set_camera_view([2.5, 2.5, 2.5], [0.0, 0.0, 0.0])
# design the scene
scene_entities = design_scene()
# Play simulator
sim.reset()
# Now we are ready!
print("[INFO]: Setup complete...")
# Run simulator
run_simulator(sim, scene_entities)
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
NVIDIA-Omniverse/orbit/source/standalone/tutorials/04_sensors/run_frame_transformer.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
This script demonstrates the FrameTransformer sensor by visualizing the frames that it creates.
.. code-block:: bash
# Usage
./orbit.sh -p source/standalone/tutorials/04_sensors/run_frame_transformer.py
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(
description="This script checks the FrameTransformer sensor by visualizing the frames that it creates."
)
AppLauncher.add_app_launcher_args(parser)
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(headless=args_cli.headless)
simulation_app = app_launcher.app
"""Rest everything follows."""
import math
import torch
import omni.isaac.debug_draw._debug_draw as omni_debug_draw
import omni.isaac.orbit.sim as sim_utils
import omni.isaac.orbit.utils.math as math_utils
from omni.isaac.orbit.assets import Articulation
from omni.isaac.orbit.markers import VisualizationMarkers
from omni.isaac.orbit.markers.config import FRAME_MARKER_CFG
from omni.isaac.orbit.sensors import FrameTransformer, FrameTransformerCfg, OffsetCfg
from omni.isaac.orbit.sim import SimulationContext
##
# Pre-defined configs
##
from omni.isaac.orbit_assets.anymal import ANYMAL_C_CFG # isort:skip
def define_sensor() -> FrameTransformer:
"""Defines the FrameTransformer sensor to add to the scene."""
# define offset
rot_offset = math_utils.quat_from_euler_xyz(torch.zeros(1), torch.zeros(1), torch.tensor(-math.pi / 2))
pos_offset = math_utils.quat_apply(rot_offset, torch.tensor([0.08795, 0.01305, -0.33797]))
# Example using .* to get full body + LF_FOOT
frame_transformer_cfg = FrameTransformerCfg(
prim_path="/World/Robot/base",
target_frames=[
FrameTransformerCfg.FrameCfg(prim_path="/World/Robot/.*"),
FrameTransformerCfg.FrameCfg(
prim_path="/World/Robot/LF_SHANK",
name="LF_FOOT_USER",
offset=OffsetCfg(pos=tuple(pos_offset.tolist()), rot=tuple(rot_offset[0].tolist())),
),
],
debug_vis=False,
)
frame_transformer = FrameTransformer(frame_transformer_cfg)
return frame_transformer
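# Illustrative helper (not called above): express one target frame in the
# source-frame coordinates, reusing subtract_frame_transforms() in the same way
# the differential-IK tutorial moves poses between frames.
def target_in_source_frame(ft: FrameTransformer, frame_index: int):
    """Return (pos, quat) of a target frame relative to the source (sketch)."""
    return math_utils.subtract_frame_transforms(
        ft.data.source_pos_w,
        ft.data.source_quat_w,
        ft.data.target_pos_w[:, frame_index],
        ft.data.target_quat_w[:, frame_index],
    )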
def design_scene() -> dict:
"""Design the scene."""
# Populate scene
# -- Ground-plane
cfg = sim_utils.GroundPlaneCfg()
cfg.func("/World/defaultGroundPlane", cfg)
# -- Lights
cfg = sim_utils.DistantLightCfg(intensity=3000.0, color=(0.75, 0.75, 0.75))
cfg.func("/World/Light", cfg)
# -- Robot
robot = Articulation(ANYMAL_C_CFG.replace(prim_path="/World/Robot"))
# -- Sensors
frame_transformer = define_sensor()
# return the scene information
scene_entities = {"robot": robot, "frame_transformer": frame_transformer}
return scene_entities
def run_simulator(sim: sim_utils.SimulationContext, scene_entities: dict):
"""Run the simulator."""
# Define simulation stepping
sim_dt = sim.get_physics_dt()
sim_time = 0.0
count = 0
# extract entities for simplified notation
robot: Articulation = scene_entities["robot"]
frame_transformer: FrameTransformer = scene_entities["frame_transformer"]
# We only want one visualization at a time. This visualizer will be used
# to step through each frame so the user can verify that the correct frame
    # is being visualized as the frame names are printed to the console.
if not args_cli.headless:
cfg = FRAME_MARKER_CFG.replace(prim_path="/Visuals/FrameVisualizerFromScript")
cfg.markers["frame"].scale = (0.1, 0.1, 0.1)
transform_visualizer = VisualizationMarkers(cfg)
# debug drawing for lines connecting the frame
draw_interface = omni_debug_draw.acquire_debug_draw_interface()
else:
transform_visualizer = None
draw_interface = None
frame_index = 0
# Simulate physics
while simulation_app.is_running():
# perform this loop at policy control freq (50 Hz)
robot.set_joint_position_target(robot.data.default_joint_pos.clone())
robot.write_data_to_sim()
# perform step
sim.step()
# update sim-time
sim_time += sim_dt
count += 1
# read data from sim
robot.update(sim_dt)
frame_transformer.update(dt=sim_dt)
# Change the frame that we are visualizing to ensure that frame names
# are correctly associated with the frames
if not args_cli.headless:
if count % 50 == 0:
# get frame names
frame_names = frame_transformer.data.target_frame_names
print(f"Displaying Frame ID {frame_index}: {frame_names[frame_index]}")
# increment frame index
frame_index += 1
frame_index = frame_index % len(frame_names)
# visualize frame
source_pos = frame_transformer.data.source_pos_w
source_quat = frame_transformer.data.source_quat_w
target_pos = frame_transformer.data.target_pos_w[:, frame_index]
target_quat = frame_transformer.data.target_quat_w[:, frame_index]
# draw the frames
transform_visualizer.visualize(
torch.cat([source_pos, target_pos], dim=0), torch.cat([source_quat, target_quat], dim=0)
)
# draw the line connecting the frames
draw_interface.clear_lines()
# plain color for lines
lines_colors = [[1.0, 1.0, 0.0, 1.0]] * source_pos.shape[0]
line_thicknesses = [5.0] * source_pos.shape[0]
draw_interface.draw_lines(source_pos.tolist(), target_pos.tolist(), lines_colors, line_thicknesses)
def main():
"""Main function."""
# Load kit helper
sim = SimulationContext(sim_utils.SimulationCfg(dt=0.005))
# Set main camera
sim.set_camera_view(eye=[2.5, 2.5, 2.5], target=[0.0, 0.0, 0.0])
# Design the scene
scene_entities = design_scene()
# Play the simulator
sim.reset()
# Now we are ready!
print("[INFO]: Setup complete...")
# Run the simulator
run_simulator(sim, scene_entities)
if __name__ == "__main__":
# Run the main function
main()
# Close the simulator
simulation_app.close()
NVIDIA-Omniverse/orbit/source/standalone/tutorials/04_sensors/run_ray_caster_camera.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
This script shows how to use the ray-cast camera sensor from the Orbit framework.
The camera sensor uses Warp kernels to perform ray-casting against static meshes.
.. code-block:: bash
# Usage
./orbit.sh -p source/standalone/tutorials/04_sensors/run_ray_caster_camera.py
"""
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="This script demonstrates how to use the ray-cast camera sensor.")
parser.add_argument("--num_envs", type=int, default=16, help="Number of environments to generate.")
parser.add_argument("--save", action="store_true", default=False, help="Save the obtained data to disk.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import os
import torch
import omni.isaac.core.utils.prims as prim_utils
import omni.replicator.core as rep
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.sensors.ray_caster import RayCasterCamera, RayCasterCameraCfg, patterns
from omni.isaac.orbit.utils import convert_dict_to_backend
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
from omni.isaac.orbit.utils.math import project_points, unproject_depth
def define_sensor() -> RayCasterCamera:
"""Defines the ray-cast camera sensor to add to the scene."""
# Camera base frames
    # In contrast to the USD camera, we associate the sensor with the prims at these locations.
    # This means that the parent prim of the sensor is the prim at this location.
prim_utils.create_prim("/World/Origin_00/CameraSensor", "Xform")
prim_utils.create_prim("/World/Origin_01/CameraSensor", "Xform")
# Setup camera sensor
camera_cfg = RayCasterCameraCfg(
prim_path="/World/Origin_.*/CameraSensor",
mesh_prim_paths=["/World/ground"],
update_period=0.1,
offset=RayCasterCameraCfg.OffsetCfg(pos=(0.0, 0.0, 0.0), rot=(1.0, 0.0, 0.0, 0.0)),
data_types=["distance_to_image_plane", "normals", "distance_to_camera"],
debug_vis=True,
pattern_cfg=patterns.PinholeCameraPatternCfg(
focal_length=24.0,
horizontal_aperture=20.955,
height=480,
width=640,
),
)
# Create camera
camera = RayCasterCamera(cfg=camera_cfg)
return camera
def design_scene():
# Populate scene
# -- Rough terrain
cfg = sim_utils.UsdFileCfg(usd_path=f"{ISAAC_NUCLEUS_DIR}/Environments/Terrains/rough_plane.usd")
cfg.func("/World/ground", cfg)
# -- Lights
cfg = sim_utils.DistantLightCfg(intensity=600.0, color=(0.75, 0.75, 0.75))
cfg.func("/World/Light", cfg)
# -- Sensors
camera = define_sensor()
# return the scene information
scene_entities = {"camera": camera}
return scene_entities
def run_simulator(sim: sim_utils.SimulationContext, scene_entities: dict):
"""Run the simulator."""
# extract entities for simplified notation
camera: RayCasterCamera = scene_entities["camera"]
# Create replicator writer
output_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), "output", "ray_caster_camera")
rep_writer = rep.BasicWriter(output_dir=output_dir, frame_padding=3)
# Set pose: There are two ways to set the pose of the camera.
# -- Option-1: Set pose using view
eyes = torch.tensor([[2.5, 2.5, 2.5], [-2.5, -2.5, 2.5]], device=sim.device)
targets = torch.tensor([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]], device=sim.device)
camera.set_world_poses_from_view(eyes, targets)
# -- Option-2: Set pose using ROS
# position = torch.tensor([[2.5, 2.5, 2.5]], device=sim.device)
# orientation = torch.tensor([[-0.17591989, 0.33985114, 0.82047325, -0.42470819]], device=sim.device)
# camera.set_world_poses(position, orientation, indices=[0], convention="ros")
# Simulate physics
while simulation_app.is_running():
# Step simulation
sim.step()
# Update camera data
camera.update(dt=sim.get_physics_dt())
# Print camera info
print(camera)
print("Received shape of depth image: ", camera.data.output["distance_to_image_plane"].shape)
print("-------------------------------")
# Extract camera data
if args_cli.save:
# Extract camera data
camera_index = 0
# note: BasicWriter only supports saving data in numpy format, so we need to convert the data to numpy.
if sim.backend == "torch":
# tensordict allows easy indexing of tensors in the dictionary
single_cam_data = convert_dict_to_backend(camera.data.output[camera_index], backend="numpy")
else:
# for numpy, we need to manually index the data
single_cam_data = dict()
for key, value in camera.data.output.items():
single_cam_data[key] = value[camera_index]
# Extract the other information
single_cam_info = camera.data.info[camera_index]
# Pack data back into replicator format to save them using its writer
rep_output = dict()
for key, data, info in zip(single_cam_data.keys(), single_cam_data.values(), single_cam_info.values()):
if info is not None:
rep_output[key] = {"data": data, "info": info}
else:
rep_output[key] = data
# Save images
rep_output["trigger_outputs"] = {"on_time": camera.frame[camera_index]}
rep_writer.write(rep_output)
# Pointcloud in world frame
points_3d_cam = unproject_depth(
camera.data.output["distance_to_image_plane"], camera.data.intrinsic_matrices
)
# Check methods are valid
im_height, im_width = camera.image_shape
# -- project points to (u, v, d)
reproj_points = project_points(points_3d_cam, camera.data.intrinsic_matrices)
reproj_depths = reproj_points[..., -1].view(-1, im_width, im_height).transpose_(1, 2)
sim_depths = camera.data.output["distance_to_image_plane"].squeeze(-1)
torch.testing.assert_close(reproj_depths, sim_depths)
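# The assert above verifies the project/unproject round trip: unproject_depth()
# lifts every depth pixel to a 3D point in the camera frame, project_points()
# maps it back to (u, v, d), and the recovered depth d must match the rendered
# distance_to_image_plane image.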
def main():
"""Main function."""
# Load kit helper
sim = sim_utils.SimulationContext()
# Set main camera
sim.set_camera_view([2.5, 2.5, 3.5], [0.0, 0.0, 0.0])
# design the scene
scene_entities = design_scene()
# Play simulator
sim.reset()
# Now we are ready!
print("[INFO]: Setup complete...")
# Run simulator
run_simulator(sim=sim, scene_entities=scene_entities)
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
NVIDIA-Omniverse/orbit/source/standalone/tutorials/04_sensors/add_sensors_on_robot.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
This script demonstrates how to add and simulate on-board sensors for a robot.
We add the following sensors on the quadruped robot, ANYmal-C (ANYbotics):
* USD-Camera: This is a camera sensor that is attached to the robot's base.
* Height Scanner: This is a height scanner sensor that is attached to the robot's base.
* Contact Sensor: This is a contact sensor that is attached to the robot's feet.
.. code-block:: bash
# Usage
./orbit.sh -p source/standalone/tutorials/04_sensors/add_sensors_on_robot.py
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Tutorial on adding sensors on a robot.")
parser.add_argument("--num_envs", type=int, default=2, help="Number of environments to spawn.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import torch
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.assets import ArticulationCfg, AssetBaseCfg
from omni.isaac.orbit.scene import InteractiveScene, InteractiveSceneCfg
from omni.isaac.orbit.sensors import CameraCfg, ContactSensorCfg, RayCasterCfg, patterns
from omni.isaac.orbit.utils import configclass
##
# Pre-defined configs
##
from omni.isaac.orbit_assets.anymal import ANYMAL_C_CFG # isort: skip
@configclass
class SensorsSceneCfg(InteractiveSceneCfg):
"""Design the scene with sensors on the robot."""
# ground plane
ground = AssetBaseCfg(prim_path="/World/defaultGroundPlane", spawn=sim_utils.GroundPlaneCfg())
# lights
dome_light = AssetBaseCfg(
prim_path="/World/Light", spawn=sim_utils.DomeLightCfg(intensity=3000.0, color=(0.75, 0.75, 0.75))
)
# robot
robot: ArticulationCfg = ANYMAL_C_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
# sensors
camera = CameraCfg(
prim_path="{ENV_REGEX_NS}/Robot/base/front_cam",
update_period=0.1,
height=480,
width=640,
data_types=["rgb", "distance_to_image_plane"],
spawn=sim_utils.PinholeCameraCfg(
focal_length=24.0, focus_distance=400.0, horizontal_aperture=20.955, clipping_range=(0.1, 1.0e5)
),
offset=CameraCfg.OffsetCfg(pos=(0.510, 0.0, 0.015), rot=(0.5, -0.5, 0.5, -0.5), convention="ros"),
)
height_scanner = RayCasterCfg(
prim_path="{ENV_REGEX_NS}/Robot/base",
update_period=0.02,
offset=RayCasterCfg.OffsetCfg(pos=(0.0, 0.0, 20.0)),
attach_yaw_only=True,
pattern_cfg=patterns.GridPatternCfg(resolution=0.1, size=[1.6, 1.0]),
debug_vis=True,
mesh_prim_paths=["/World/defaultGroundPlane"],
)
contact_forces = ContactSensorCfg(
prim_path="{ENV_REGEX_NS}/Robot/.*_FOOT", update_period=0.0, history_length=6, debug_vis=True
)
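# Note on update rates (illustrative arithmetic): with the sim.dt = 0.005 s set
# in main(), update_period=0.1 refreshes the camera at 10 Hz, update_period=0.02
# refreshes the height scanner at 50 Hz, and update_period=0.0 updates the
# contact sensor on every physics step.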
def run_simulator(sim: sim_utils.SimulationContext, scene: InteractiveScene):
"""Run the simulator."""
# Define simulation stepping
sim_dt = sim.get_physics_dt()
sim_time = 0.0
count = 0
# Simulate physics
while simulation_app.is_running():
# Reset
if count % 500 == 0:
# reset counter
count = 0
# reset the scene entities
# root state
# we offset the root state by the origin since the states are written in simulation world frame
# if this is not done, then the robots will be spawned at the (0, 0, 0) of the simulation world
root_state = scene["robot"].data.default_root_state.clone()
root_state[:, :3] += scene.env_origins
scene["robot"].write_root_state_to_sim(root_state)
# set joint positions with some noise
joint_pos, joint_vel = (
scene["robot"].data.default_joint_pos.clone(),
scene["robot"].data.default_joint_vel.clone(),
)
joint_pos += torch.rand_like(joint_pos) * 0.1
scene["robot"].write_joint_state_to_sim(joint_pos, joint_vel)
# clear internal buffers
scene.reset()
print("[INFO]: Resetting robot state...")
# Apply default actions to the robot
# -- generate actions/commands
targets = scene["robot"].data.default_joint_pos
# -- apply action to the robot
scene["robot"].set_joint_position_target(targets)
# -- write data to sim
scene.write_data_to_sim()
# perform step
sim.step()
# update sim-time
sim_time += sim_dt
count += 1
# update buffers
scene.update(sim_dt)
# print information from the sensors
print("-------------------------------")
print(scene["camera"])
print("Received shape of rgb image: ", scene["camera"].data.output["rgb"].shape)
print("Received shape of depth image: ", scene["camera"].data.output["distance_to_image_plane"].shape)
print("-------------------------------")
print(scene["height_scanner"])
print("Received max height value: ", torch.max(scene["height_scanner"].data.ray_hits_w[..., -1]).item())
print("-------------------------------")
print(scene["contact_forces"])
print("Received max contact force of: ", torch.max(scene["contact_forces"].data.net_forces_w).item())
def main():
"""Main function."""
# Initialize the simulation context
sim_cfg = sim_utils.SimulationCfg(dt=0.005, substeps=1)
sim = sim_utils.SimulationContext(sim_cfg)
# Set main camera
sim.set_camera_view(eye=[3.5, 3.5, 3.5], target=[0.0, 0.0, 0.0])
# design scene
scene_cfg = SensorsSceneCfg(num_envs=args_cli.num_envs, env_spacing=2.0)
scene = InteractiveScene(scene_cfg)
# Play the simulator
sim.reset()
# Now we are ready!
print("[INFO]: Setup complete...")
# Run the simulator
run_simulator(sim, scene)
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
NVIDIA-Omniverse/orbit/source/standalone/tutorials/04_sensors/run_ray_caster.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
This script demonstrates how to use the ray-caster sensor.
.. code-block:: bash
# Usage
./orbit.sh -p source/standalone/tutorials/04_sensors/run_ray_caster.py
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Ray Caster Test Script")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import torch
import omni.isaac.core.utils.prims as prim_utils
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.assets import RigidObject, RigidObjectCfg
from omni.isaac.orbit.sensors.ray_caster import RayCaster, RayCasterCfg, patterns
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
from omni.isaac.orbit.utils.timer import Timer
def define_sensor() -> RayCaster:
"""Defines the ray-caster sensor to add to the scene."""
# Create a ray-caster sensor
ray_caster_cfg = RayCasterCfg(
prim_path="/World/Origin.*/ball",
mesh_prim_paths=["/World/ground"],
pattern_cfg=patterns.GridPatternCfg(resolution=0.1, size=(2.0, 2.0)),
attach_yaw_only=True,
debug_vis=not args_cli.headless,
)
ray_caster = RayCaster(cfg=ray_caster_cfg)
return ray_caster
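# Illustrative helper (not called above): the z-coordinates of the ray hits in
# world frame. This ray_hits_w buffer is the same one the sensors-on-robot
# tutorial reduces with torch.max() to estimate terrain height.
def hit_heights(ray_caster: RayCaster) -> torch.Tensor:
    """Return the world-frame height of every ray hit (sketch)."""
    return ray_caster.data.ray_hits_w[..., -1]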
def design_scene() -> dict:
"""Design the scene."""
# Populate scene
# -- Rough terrain
cfg = sim_utils.UsdFileCfg(usd_path=f"{ISAAC_NUCLEUS_DIR}/Environments/Terrains/rough_plane.usd")
cfg.func("/World/ground", cfg)
# -- Light
cfg = sim_utils.DistantLightCfg(intensity=2000)
cfg.func("/World/light", cfg)
    # Create separate groups called "Origin0" through "Origin3"
    # Each group will have a ball in it
origins = [[0.25, 0.25, 0.0], [-0.25, 0.25, 0.0], [0.25, -0.25, 0.0], [-0.25, -0.25, 0.0]]
for i, origin in enumerate(origins):
prim_utils.create_prim(f"/World/Origin{i}", "Xform", translation=origin)
# -- Balls
cfg = RigidObjectCfg(
prim_path="/World/Origin.*/ball",
spawn=sim_utils.SphereCfg(
radius=0.25,
rigid_props=sim_utils.RigidBodyPropertiesCfg(),
mass_props=sim_utils.MassPropertiesCfg(mass=0.5),
collision_props=sim_utils.CollisionPropertiesCfg(),
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(0.0, 0.0, 1.0)),
),
)
balls = RigidObject(cfg)
# -- Sensors
ray_caster = define_sensor()
# return the scene information
scene_entities = {"balls": balls, "ray_caster": ray_caster}
return scene_entities
def run_simulator(sim: sim_utils.SimulationContext, scene_entities: dict):
"""Run the simulator."""
# Extract scene_entities for simplified notation
ray_caster: RayCaster = scene_entities["ray_caster"]
balls: RigidObject = scene_entities["balls"]
# define an initial position of the sensor
ball_default_state = balls.data.default_root_state.clone()
ball_default_state[:, :3] = torch.rand_like(ball_default_state[:, :3]) * 10
# Create a counter for resetting the scene
step_count = 0
# Simulate physics
while simulation_app.is_running():
# Reset the scene
if step_count % 250 == 0:
# reset the balls
balls.write_root_state_to_sim(ball_default_state)
# reset the sensor
ray_caster.reset()
# reset the counter
step_count = 0
# Step simulation
sim.step()
# Update the ray-caster
with Timer(
f"Ray-caster update with {4} x {ray_caster.num_rays} rays with max height of"
f" {torch.max(ray_caster.data.pos_w).item():.2f}"
):
ray_caster.update(dt=sim.get_physics_dt(), force_recompute=True)
# Update counter
step_count += 1
def main():
"""Main function."""
# Load simulation context
sim_cfg = sim_utils.SimulationCfg()
sim = sim_utils.SimulationContext(sim_cfg)
# Set main camera
sim.set_camera_view([0.0, 15.0, 15.0], [0.0, 0.0, -2.5])
# Design the scene
scene_entities = design_scene()
# Play simulator
sim.reset()
# Now we are ready!
print("[INFO]: Setup complete...")
# Run simulator
run_simulator(sim=sim, scene_entities=scene_entities)
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
| 4,764 | Python | 30.143791 | 101 | 0.649664 |
NVIDIA-Omniverse/orbit/source/standalone/tutorials/00_sim/launch_app.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
This script demonstrates how to run IsaacSim via the AppLauncher
.. code-block:: bash
    # Usage
    ./orbit.sh -p source/standalone/tutorials/00_sim/launch_app.py
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# create argparser
parser = argparse.ArgumentParser(description="Tutorial on running IsaacSim via the AppLauncher.")
parser.add_argument("--size", type=float, default=1.0, help="Side-length of cuboid")
# SimulationApp arguments https://docs.omniverse.nvidia.com/py/isaacsim/source/extensions/omni.isaac.kit/docs/index.html?highlight=simulationapp#omni.isaac.kit.SimulationApp
parser.add_argument(
"--width", type=int, default=1280, help="Width of the viewport and generated images. Defaults to 1280"
)
parser.add_argument(
"--height", type=int, default=720, help="Height of the viewport and generated images. Defaults to 720"
)
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import omni.isaac.orbit.sim as sim_utils
def design_scene():
"""Designs the scene by spawning ground plane, light, objects and meshes from usd files."""
# Ground-plane
cfg_ground = sim_utils.GroundPlaneCfg()
cfg_ground.func("/World/defaultGroundPlane", cfg_ground)
# spawn distant light
cfg_light_distant = sim_utils.DistantLightCfg(
intensity=3000.0,
color=(0.75, 0.75, 0.75),
)
cfg_light_distant.func("/World/lightDistant", cfg_light_distant, translation=(1, 0, 10))
# spawn a cuboid
cfg_cuboid = sim_utils.CuboidCfg(
size=[args_cli.size] * 3,
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(1.0, 1.0, 1.0)),
)
# Spawn cuboid, altering translation on the z-axis to scale to its size
cfg_cuboid.func("/World/Object", cfg_cuboid, translation=(0.0, 0.0, args_cli.size / 2))
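# Worked example: invoking the script with `--size 2.0` spawns a 2 m cube whose center is
# translated to z = 1.0, so the cube rests exactly on the ground plane.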
def main():
"""Main function."""
# Initialize the simulation context
sim_cfg = sim_utils.SimulationCfg(dt=0.01, substeps=1)
sim = sim_utils.SimulationContext(sim_cfg)
# Set main camera
sim.set_camera_view([2.0, 0.0, 2.5], [-0.5, 0.0, 0.5])
# Design scene by adding assets to it
design_scene()
# Play the simulator
sim.reset()
# Now we are ready!
print("[INFO]: Setup complete...")
# Simulate physics
while simulation_app.is_running():
# perform step
sim.step()
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
| 2,854 | Python | 28.432989 | 173 | 0.689909 |
NVIDIA-Omniverse/orbit/source/standalone/tutorials/00_sim/create_empty.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""This script demonstrates how to create a simple stage in Isaac Sim.
.. code-block:: bash
    # Usage
    ./orbit.sh -p source/standalone/tutorials/00_sim/create_empty.py
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# create argparser
parser = argparse.ArgumentParser(description="Tutorial on creating an empty stage.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
from omni.isaac.orbit.sim import SimulationCfg, SimulationContext
def main():
"""Main function."""
# Initialize the simulation context
sim_cfg = SimulationCfg(dt=0.01, substeps=1)
sim = SimulationContext(sim_cfg)
# Set main camera
sim.set_camera_view([2.5, 2.5, 2.5], [0.0, 0.0, 0.0])
# Play the simulator
sim.reset()
# Now we are ready!
print("[INFO]: Setup complete...")
# Simulate physics
while simulation_app.is_running():
# perform step
sim.step()
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
| 1,436 | Python | 22.177419 | 84 | 0.685933 |
NVIDIA-Omniverse/orbit/source/standalone/tutorials/00_sim/spawn_prims.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""This script demonstrates how to spawn prims into the scene.
.. code-block:: bash
    # Usage
    ./orbit.sh -p source/standalone/tutorials/00_sim/spawn_prims.py
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# create argparser
parser = argparse.ArgumentParser(description="Tutorial on spawning prims into the scene.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import omni.isaac.core.utils.prims as prim_utils
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
def design_scene():
"""Designs the scene by spawning ground plane, light, objects and meshes from usd files."""
# Ground-plane
cfg_ground = sim_utils.GroundPlaneCfg()
cfg_ground.func("/World/defaultGroundPlane", cfg_ground)
# spawn distant light
cfg_light_distant = sim_utils.DistantLightCfg(
intensity=3000.0,
color=(0.75, 0.75, 0.75),
)
cfg_light_distant.func("/World/lightDistant", cfg_light_distant, translation=(1, 0, 10))
# create a new xform prim for all objects to be spawned under
prim_utils.create_prim("/World/Objects", "Xform")
# spawn a red cone
cfg_cone = sim_utils.ConeCfg(
radius=0.15,
height=0.5,
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(1.0, 0.0, 0.0)),
)
cfg_cone.func("/World/Objects/Cone1", cfg_cone, translation=(-1.0, 1.0, 1.0))
cfg_cone.func("/World/Objects/Cone2", cfg_cone, translation=(-1.0, -1.0, 1.0))
# spawn a green cone with colliders and rigid body
cfg_cone_rigid = sim_utils.ConeCfg(
radius=0.15,
height=0.5,
rigid_props=sim_utils.RigidBodyPropertiesCfg(),
mass_props=sim_utils.MassPropertiesCfg(mass=1.0),
collision_props=sim_utils.CollisionPropertiesCfg(),
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(0.0, 1.0, 0.0)),
)
cfg_cone_rigid.func(
"/World/Objects/ConeRigid", cfg_cone_rigid, translation=(0.0, 0.0, 2.0), orientation=(0.5, 0.0, 0.5, 0.0)
)
# spawn a usd file of a table into the scene
cfg = sim_utils.UsdFileCfg(usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/Mounts/SeattleLabTable/table_instanceable.usd")
cfg.func("/World/Objects/Table", cfg, translation=(0.0, 0.0, 1.05))
def main():
"""Main function."""
# Initialize the simulation context
sim_cfg = sim_utils.SimulationCfg(dt=0.01, substeps=1)
sim = sim_utils.SimulationContext(sim_cfg)
# Set main camera
sim.set_camera_view([2.0, 0.0, 2.5], [-0.5, 0.0, 0.5])
# Design scene by adding assets to it
design_scene()
# Play the simulator
sim.reset()
# Now we are ready!
print("[INFO]: Setup complete...")
# Simulate physics
while simulation_app.is_running():
# perform step
sim.step()
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
| 3,338 | Python | 29.354545 | 115 | 0.670761 |
NVIDIA-Omniverse/orbit/source/standalone/tutorials/00_sim/log_time.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
This script demonstrates how to generate log outputs while the simulation plays.
It accompanies the tutorial on docker usage.
.. code-block:: bash
    # Usage
    ./orbit.sh -p source/standalone/tutorials/00_sim/log_time.py
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
import os
from omni.isaac.orbit.app import AppLauncher
# create argparser
parser = argparse.ArgumentParser(description="Tutorial on creating logs from within the docker container.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
from omni.isaac.orbit.sim import SimulationCfg, SimulationContext
def main():
"""Main function."""
# Specify that the logs must be in logs/docker_tutorial
log_dir_path = os.path.join("logs", "docker_tutorial")
# In the container, the absolute path will be
# /workspace/orbit/logs/docker_tutorial, because
# all python execution is done through /workspace/orbit/orbit.sh
# and the calling process' path will be /workspace/orbit
log_dir_path = os.path.abspath(log_dir_path)
if not os.path.isdir(log_dir_path):
        os.makedirs(log_dir_path)
print(f"[INFO] Logging experiment to directory: {log_dir_path}")
# Initialize the simulation context
sim_cfg = SimulationCfg(dt=0.01, substeps=1)
sim = SimulationContext(sim_cfg)
# Set main camera
sim.set_camera_view([2.5, 2.5, 2.5], [0.0, 0.0, 0.0])
# Play the simulator
sim.reset()
# Now we are ready!
print("[INFO]: Setup complete...")
# Prepare to count sim_time
sim_dt = sim.get_physics_dt()
sim_time = 0.0
# Open logging file
with open(os.path.join(log_dir_path, "log.txt"), "w") as log_file:
# Simulate physics
while simulation_app.is_running():
            log_file.write(f"{sim_time}\n")
# perform step
sim.step()
sim_time += sim_dt
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
| 2,342 | Python | 27.228915 | 107 | 0.673356 |
NVIDIA-Omniverse/orbit/source/standalone/demos/markers.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""This script demonstrates different types of markers.
.. code-block:: bash
    # Usage
    ./orbit.sh -p source/standalone/demos/markers.py
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="This script demonstrates different types of markers.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import torch
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.markers import VisualizationMarkers, VisualizationMarkersCfg
from omni.isaac.orbit.sim import SimulationContext
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR, ISAAC_ORBIT_NUCLEUS_DIR
from omni.isaac.orbit.utils.math import quat_from_angle_axis
def define_markers() -> VisualizationMarkers:
"""Define markers with various different shapes."""
marker_cfg = VisualizationMarkersCfg(
prim_path="/Visuals/myMarkers",
markers={
"frame": sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/UIElements/frame_prim.usd",
scale=(0.5, 0.5, 0.5),
),
"arrow_x": sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/UIElements/arrow_x.usd",
scale=(1.0, 0.5, 0.5),
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(0.0, 1.0, 1.0)),
),
"cube": sim_utils.CuboidCfg(
size=(1.0, 1.0, 1.0),
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(1.0, 0.0, 0.0)),
),
"sphere": sim_utils.SphereCfg(
radius=0.5,
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(0.0, 1.0, 0.0)),
),
"cylinder": sim_utils.CylinderCfg(
radius=0.5,
height=1.0,
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(0.0, 0.0, 1.0)),
),
"cone": sim_utils.ConeCfg(
radius=0.5,
height=1.0,
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(1.0, 1.0, 0.0)),
),
"mesh": sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/Blocks/DexCube/dex_cube_instanceable.usd",
scale=(10.0, 10.0, 10.0),
),
"mesh_recolored": sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/Blocks/DexCube/dex_cube_instanceable.usd",
scale=(10.0, 10.0, 10.0),
visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(1.0, 0.25, 0.0)),
),
"robot_mesh": sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_ORBIT_NUCLEUS_DIR}/Robots/ANYbotics/ANYmal-D/anymal_d.usd",
scale=(2.0, 2.0, 2.0),
visual_material=sim_utils.GlassMdlCfg(glass_color=(0.0, 0.1, 0.0)),
),
},
)
return VisualizationMarkers(marker_cfg)
def main():
"""Main function."""
# Load kit helper
sim = SimulationContext(sim_utils.SimulationCfg(dt=0.01, substeps=1))
# Set main camera
sim.set_camera_view([0.0, 18.0, 12.0], [0.0, 3.0, 0.0])
# Spawn things into stage
# Lights
cfg = sim_utils.DomeLightCfg(intensity=3000.0, color=(0.75, 0.75, 0.75))
cfg.func("/World/Light", cfg)
# create markers
my_visualizer = define_markers()
# define a grid of positions where the markers should be placed
num_markers_per_type = 5
grid_spacing = 2.0
# Calculate the half-width and half-height
half_width = (num_markers_per_type - 1) / 2.0
half_height = (my_visualizer.num_prototypes - 1) / 2.0
# Create the x and y ranges centered around the origin
x_range = torch.arange(-half_width * grid_spacing, (half_width + 1) * grid_spacing, grid_spacing)
y_range = torch.arange(-half_height * grid_spacing, (half_height + 1) * grid_spacing, grid_spacing)
# Create the grid
x_grid, y_grid = torch.meshgrid(x_range, y_range, indexing="ij")
x_grid = x_grid.reshape(-1)
y_grid = y_grid.reshape(-1)
z_grid = torch.zeros_like(x_grid)
# marker locations
marker_locations = torch.stack([x_grid, y_grid, z_grid], dim=1)
marker_indices = torch.arange(my_visualizer.num_prototypes).repeat(num_markers_per_type)
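    # Worked example: with the nine marker prototypes defined above and
    # num_markers_per_type = 5, half_width = 2.0 and half_height = 4.0, so x_range spans
    # [-4, 4] m and y_range spans [-8, 8] m at 2 m spacing, i.e. a 5 x 9 grid of 45
    # marker instances, matching the 45 entries in marker_indices.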
# Play the simulator
sim.reset()
# Now we are ready!
print("[INFO]: Setup complete...")
# Yaw angle
yaw = torch.zeros_like(marker_locations[:, 0])
# Simulate physics
while simulation_app.is_running():
# rotate the markers around the z-axis for visualization
marker_orientations = quat_from_angle_axis(yaw, torch.tensor([0.0, 0.0, 1.0]))
# visualize
my_visualizer.visualize(marker_locations, marker_orientations, marker_indices=marker_indices)
# roll corresponding indices to show how marker prototype can be changed
if yaw[0].item() % (0.5 * torch.pi) < 0.01:
marker_indices = torch.roll(marker_indices, 1)
# perform step
sim.step()
# increment yaw
yaw += 0.01
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
| 5,641 | Python | 34.936306 | 103 | 0.617089 |
NVIDIA-Omniverse/orbit/source/standalone/demos/hands.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
This script demonstrates different dexterous hands.
.. code-block:: bash
    # Usage
    ./orbit.sh -p source/standalone/demos/hands.py
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="This script demonstrates different dexterous hands.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import numpy as np
import torch
import omni.isaac.core.utils.prims as prim_utils
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.assets import Articulation
##
# Pre-defined configs
##
from omni.isaac.orbit_assets.allegro import ALLEGRO_HAND_CFG # isort:skip
from omni.isaac.orbit_assets.shadow_hand import SHADOW_HAND_CFG # isort:skip
def define_origins(num_origins: int, spacing: float) -> list[list[float]]:
"""Defines the origins of the the scene."""
# create tensor based on number of environments
env_origins = torch.zeros(num_origins, 3)
# create a grid of origins
num_cols = np.floor(np.sqrt(num_origins))
num_rows = np.ceil(num_origins / num_cols)
xx, yy = torch.meshgrid(torch.arange(num_rows), torch.arange(num_cols), indexing="xy")
env_origins[:, 0] = spacing * xx.flatten()[:num_origins] - spacing * (num_rows - 1) / 2
env_origins[:, 1] = spacing * yy.flatten()[:num_origins] - spacing * (num_cols - 1) / 2
env_origins[:, 2] = 0.0
# return the origins
return env_origins.tolist()
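# Worked example: with num_origins=2 and spacing=0.5 (as used below), num_cols = 1 and
# num_rows = 2, so this returns [[-0.25, 0.0, 0.0], [0.25, 0.0, 0.0]]: the two hands sit
# 0.5 m apart along the x-axis, centered on the world origin.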
def design_scene() -> tuple[dict, list[list[float]]]:
"""Designs the scene."""
# Ground-plane
cfg = sim_utils.GroundPlaneCfg()
cfg.func("/World/defaultGroundPlane", cfg)
# Lights
cfg = sim_utils.DomeLightCfg(intensity=2000.0, color=(0.75, 0.75, 0.75))
cfg.func("/World/Light", cfg)
    # Create two separate groups called "Origin1" and "Origin2"
    # Each group will have a dexterous hand in it
origins = define_origins(num_origins=2, spacing=0.5)
# Origin 1 with Allegro Hand
prim_utils.create_prim("/World/Origin1", "Xform", translation=origins[0])
# -- Robot
allegro = Articulation(ALLEGRO_HAND_CFG.replace(prim_path="/World/Origin1/Robot"))
# Origin 2 with Shadow Hand
prim_utils.create_prim("/World/Origin2", "Xform", translation=origins[1])
# -- Robot
shadow_hand = Articulation(SHADOW_HAND_CFG.replace(prim_path="/World/Origin2/Robot"))
# return the scene information
scene_entities = {
"allegro": allegro,
"shadow_hand": shadow_hand,
}
return scene_entities, origins
def run_simulator(sim: sim_utils.SimulationContext, entities: dict[str, Articulation], origins: torch.Tensor):
"""Runs the simulation loop."""
# Define simulation stepping
sim_dt = sim.get_physics_dt()
sim_time = 0.0
count = 0
# Start with hand open
grasp_mode = 0
# Simulate physics
while simulation_app.is_running():
# reset
if count % 1000 == 0:
# reset counters
sim_time = 0.0
count = 0
# reset robots
for index, robot in enumerate(entities.values()):
# root state
root_state = robot.data.default_root_state.clone()
root_state[:, :3] += origins[index]
robot.write_root_state_to_sim(root_state)
# joint state
joint_pos, joint_vel = robot.data.default_joint_pos.clone(), robot.data.default_joint_vel.clone()
robot.write_joint_state_to_sim(joint_pos, joint_vel)
# reset the internal state
robot.reset()
print("[INFO]: Resetting robots state...")
# toggle grasp mode
if count % 100 == 0:
grasp_mode = 1 - grasp_mode
# apply default actions to the hands robots
for robot in entities.values():
# generate joint positions
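            # (index 0 selects the lower soft joint limits -> hand open; index 1 the upper -> hand closed)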
joint_pos_target = robot.data.soft_joint_pos_limits[..., grasp_mode]
# apply action to the robot
robot.set_joint_position_target(joint_pos_target)
# write data to sim
robot.write_data_to_sim()
# perform step
sim.step()
# update sim-time
sim_time += sim_dt
count += 1
# update buffers
for robot in entities.values():
robot.update(sim_dt)
def main():
"""Main function."""
# Initialize the simulation context
sim = sim_utils.SimulationContext(sim_utils.SimulationCfg(dt=0.01, substeps=1))
# Set main camera
sim.set_camera_view(eye=[0.0, -0.5, 1.5], target=[0.0, -0.2, 0.5])
# design scene
scene_entities, scene_origins = design_scene()
scene_origins = torch.tensor(scene_origins, device=sim.device)
# Play the simulator
sim.reset()
# Now we are ready!
print("[INFO]: Setup complete...")
# Run the simulator
run_simulator(sim, scene_entities, scene_origins)
if __name__ == "__main__":
# run the main execution
main()
# close sim app
simulation_app.close()
| 5,446 | Python | 31.041176 | 113 | 0.637716 |
NVIDIA-Omniverse/orbit/source/standalone/demos/arms.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
This script demonstrates different single-arm manipulators.
.. code-block:: bash
    # Usage
    ./orbit.sh -p source/standalone/demos/arms.py
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="This script demonstrates different single-arm manipulators.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import numpy as np
import torch
import omni.isaac.core.utils.prims as prim_utils
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.assets import Articulation
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
##
# Pre-defined configs
##
# isort: off
from omni.isaac.orbit_assets import (
FRANKA_PANDA_CFG,
UR10_CFG,
KINOVA_JACO2_N7S300_CFG,
KINOVA_JACO2_N6S300_CFG,
KINOVA_GEN3_N7_CFG,
SAWYER_CFG,
)
# isort: on
def define_origins(num_origins: int, spacing: float) -> list[list[float]]:
"""Defines the origins of the the scene."""
# create tensor based on number of environments
env_origins = torch.zeros(num_origins, 3)
# create a grid of origins
num_rows = np.floor(np.sqrt(num_origins))
num_cols = np.ceil(num_origins / num_rows)
xx, yy = torch.meshgrid(torch.arange(num_rows), torch.arange(num_cols), indexing="xy")
env_origins[:, 0] = spacing * xx.flatten()[:num_origins] - spacing * (num_rows - 1) / 2
env_origins[:, 1] = spacing * yy.flatten()[:num_origins] - spacing * (num_cols - 1) / 2
env_origins[:, 2] = 0.0
# return the origins
return env_origins.tolist()
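# Worked example: with num_origins=6 and spacing=2.0 (as used below), num_rows = 2 and
# num_cols = 3, producing a 2 x 3 grid of origins at x in {-1, 1} m and y in {-2, 0, 2} m,
# centered on the world origin.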
def design_scene() -> tuple[dict, list[list[float]]]:
"""Designs the scene."""
# Ground-plane
cfg = sim_utils.GroundPlaneCfg()
cfg.func("/World/defaultGroundPlane", cfg)
# Lights
cfg = sim_utils.DomeLightCfg(intensity=2000.0, color=(0.75, 0.75, 0.75))
cfg.func("/World/Light", cfg)
    # Create six separate groups called "Origin1" to "Origin6"
    # Each group will have a mount and a robot on top of it
origins = define_origins(num_origins=6, spacing=2.0)
# Origin 1 with Franka Panda
prim_utils.create_prim("/World/Origin1", "Xform", translation=origins[0])
# -- Table
cfg = sim_utils.UsdFileCfg(usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/Mounts/SeattleLabTable/table_instanceable.usd")
cfg.func("/World/Origin1/Table", cfg, translation=(0.55, 0.0, 1.05))
# -- Robot
franka_arm_cfg = FRANKA_PANDA_CFG.replace(prim_path="/World/Origin1/Robot")
franka_arm_cfg.init_state.pos = (0.0, 0.0, 1.05)
franka_panda = Articulation(cfg=franka_arm_cfg)
# Origin 2 with UR10
prim_utils.create_prim("/World/Origin2", "Xform", translation=origins[1])
# -- Table
cfg = sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/Mounts/Stand/stand_instanceable.usd", scale=(2.0, 2.0, 2.0)
)
cfg.func("/World/Origin2/Table", cfg, translation=(0.0, 0.0, 1.03))
# -- Robot
ur10_cfg = UR10_CFG.replace(prim_path="/World/Origin2/Robot")
ur10_cfg.init_state.pos = (0.0, 0.0, 1.03)
ur10 = Articulation(cfg=ur10_cfg)
# Origin 3 with Kinova JACO2 (7-Dof) arm
prim_utils.create_prim("/World/Origin3", "Xform", translation=origins[2])
# -- Table
cfg = sim_utils.UsdFileCfg(usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/Mounts/ThorlabsTable/table_instanceable.usd")
cfg.func("/World/Origin3/Table", cfg, translation=(0.0, 0.0, 0.8))
# -- Robot
kinova_arm_cfg = KINOVA_JACO2_N7S300_CFG.replace(prim_path="/World/Origin3/Robot")
kinova_arm_cfg.init_state.pos = (0.0, 0.0, 0.8)
kinova_j2n7s300 = Articulation(cfg=kinova_arm_cfg)
# Origin 4 with Kinova JACO2 (6-Dof) arm
prim_utils.create_prim("/World/Origin4", "Xform", translation=origins[3])
# -- Table
cfg = sim_utils.UsdFileCfg(usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/Mounts/ThorlabsTable/table_instanceable.usd")
cfg.func("/World/Origin4/Table", cfg, translation=(0.0, 0.0, 0.8))
# -- Robot
kinova_arm_cfg = KINOVA_JACO2_N6S300_CFG.replace(prim_path="/World/Origin4/Robot")
kinova_arm_cfg.init_state.pos = (0.0, 0.0, 0.8)
kinova_j2n6s300 = Articulation(cfg=kinova_arm_cfg)
    # Origin 5 with Kinova Gen3 (7-Dof) arm
prim_utils.create_prim("/World/Origin5", "Xform", translation=origins[4])
# -- Table
cfg = sim_utils.UsdFileCfg(usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/Mounts/SeattleLabTable/table_instanceable.usd")
cfg.func("/World/Origin5/Table", cfg, translation=(0.55, 0.0, 1.05))
# -- Robot
kinova_arm_cfg = KINOVA_GEN3_N7_CFG.replace(prim_path="/World/Origin5/Robot")
kinova_arm_cfg.init_state.pos = (0.0, 0.0, 1.05)
kinova_gen3n7 = Articulation(cfg=kinova_arm_cfg)
    # Origin 6 with Sawyer
prim_utils.create_prim("/World/Origin6", "Xform", translation=origins[5])
# -- Table
cfg = sim_utils.UsdFileCfg(
usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/Mounts/Stand/stand_instanceable.usd", scale=(2.0, 2.0, 2.0)
)
cfg.func("/World/Origin6/Table", cfg, translation=(0.0, 0.0, 1.03))
# -- Robot
sawyer_arm_cfg = SAWYER_CFG.replace(prim_path="/World/Origin6/Robot")
sawyer_arm_cfg.init_state.pos = (0.0, 0.0, 1.03)
sawyer = Articulation(cfg=sawyer_arm_cfg)
# return the scene information
scene_entities = {
"franka_panda": franka_panda,
"ur10": ur10,
"kinova_j2n7s300": kinova_j2n7s300,
"kinova_j2n6s300": kinova_j2n6s300,
"kinova_gen3n7": kinova_gen3n7,
"sawyer": sawyer,
}
return scene_entities, origins
def run_simulator(sim: sim_utils.SimulationContext, entities: dict[str, Articulation], origins: torch.Tensor):
"""Runs the simulation loop."""
# Define simulation stepping
sim_dt = sim.get_physics_dt()
sim_time = 0.0
count = 0
# Simulate physics
while simulation_app.is_running():
# reset
if count % 200 == 0:
# reset counters
sim_time = 0.0
count = 0
# reset the scene entities
for index, robot in enumerate(entities.values()):
# root state
root_state = robot.data.default_root_state.clone()
root_state[:, :3] += origins[index]
robot.write_root_state_to_sim(root_state)
# set joint positions
joint_pos, joint_vel = robot.data.default_joint_pos.clone(), robot.data.default_joint_vel.clone()
robot.write_joint_state_to_sim(joint_pos, joint_vel)
# clear internal buffers
robot.reset()
print("[INFO]: Resetting robots state...")
# apply random actions to the robots
for robot in entities.values():
# generate random joint positions
joint_pos_target = robot.data.default_joint_pos + torch.randn_like(robot.data.joint_pos) * 0.1
joint_pos_target = joint_pos_target.clamp_(
robot.data.soft_joint_pos_limits[..., 0], robot.data.soft_joint_pos_limits[..., 1]
)
# apply action to the robot
robot.set_joint_position_target(joint_pos_target)
# write data to sim
robot.write_data_to_sim()
# perform step
sim.step()
# update sim-time
sim_time += sim_dt
count += 1
# update buffers
for robot in entities.values():
robot.update(sim_dt)
def main():
"""Main function."""
# Initialize the simulation context
sim_cfg = sim_utils.SimulationCfg()
sim = sim_utils.SimulationContext(sim_cfg)
# Set main camera
sim.set_camera_view([3.5, 0.0, 3.2], [0.0, 0.0, 0.5])
# design scene
scene_entities, scene_origins = design_scene()
scene_origins = torch.tensor(scene_origins, device=sim.device)
# Play the simulator
sim.reset()
# Now we are ready!
print("[INFO]: Setup complete...")
# Run the simulator
run_simulator(sim, scene_entities, scene_origins)
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
| 8,481 | Python | 34.940678 | 115 | 0.643438 |
NVIDIA-Omniverse/orbit/source/standalone/demos/bipeds.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
This script demonstrates how to simulate a bipedal robot.
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="This script demonstrates how to simulate a bipedal robot.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.assets import Articulation
from omni.isaac.orbit.sim import SimulationContext
##
# Pre-defined configs
##
from omni.isaac.orbit_assets.cassie import CASSIE_CFG # isort:skip
def main():
"""Main function."""
# Load kit helper
sim = SimulationContext(
sim_utils.SimulationCfg(device="cpu", use_gpu_pipeline=False, dt=0.005, physx=sim_utils.PhysxCfg(use_gpu=False))
)
# Set main camera
sim.set_camera_view(eye=[3.5, 3.5, 3.5], target=[0.0, 0.0, 0.0])
# Spawn things into stage
# Ground-plane
cfg = sim_utils.GroundPlaneCfg()
cfg.func("/World/defaultGroundPlane", cfg)
# Lights
cfg = sim_utils.DistantLightCfg(intensity=3000.0, color=(0.75, 0.75, 0.75))
cfg.func("/World/Light", cfg)
# Robots
robot_cfg = CASSIE_CFG
robot_cfg.spawn.func("/World/Cassie/Robot_1", robot_cfg.spawn, translation=(1.5, 0.5, 0.42))
# create handles for the robots
robots = Articulation(robot_cfg.replace(prim_path="/World/Cassie/Robot.*"))
# Play the simulator
sim.reset()
# Now we are ready!
print("[INFO]: Setup complete...")
# Define simulation stepping
sim_dt = sim.get_physics_dt()
sim_time = 0.0
count = 0
# Simulate physics
while simulation_app.is_running():
# reset
if count % 200 == 0:
# reset counters
sim_time = 0.0
count = 0
# reset dof state
joint_pos, joint_vel = robots.data.default_joint_pos, robots.data.default_joint_vel
robots.write_joint_state_to_sim(joint_pos, joint_vel)
robots.write_root_pose_to_sim(robots.data.default_root_state[:, :7])
robots.write_root_velocity_to_sim(robots.data.default_root_state[:, 7:])
robots.reset()
# reset command
print(">>>>>>>> Reset!")
# apply action to the robot
robots.set_joint_position_target(robots.data.default_joint_pos.clone())
robots.write_data_to_sim()
# perform step
sim.step()
# update sim-time
sim_time += sim_dt
count += 1
# update buffers
robots.update(sim_dt)
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
| 3,056 | Python | 26.790909 | 120 | 0.643652 |
NVIDIA-Omniverse/orbit/source/standalone/demos/procedural_terrain.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
This script demonstrates procedural terrains with flat patches.
Example usage:
.. code-block:: bash
    # Generate terrain with height color scheme
    ./orbit.sh -p source/standalone/demos/procedural_terrain.py --color_scheme height
    # Generate terrain with random color scheme
    ./orbit.sh -p source/standalone/demos/procedural_terrain.py --color_scheme random
    # Generate terrain with no color scheme
    ./orbit.sh -p source/standalone/demos/procedural_terrain.py --color_scheme none
    # Generate terrain with curriculum
    ./orbit.sh -p source/standalone/demos/procedural_terrain.py --use_curriculum
    # Generate terrain with curriculum along with flat patches
    ./orbit.sh -p source/standalone/demos/procedural_terrain.py --use_curriculum --show_flat_patches
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="This script demonstrates procedural terrain generation.")
parser.add_argument(
"--color_scheme",
type=str,
default="none",
choices=["height", "random", "none"],
help="Color scheme to use for the terrain generation.",
)
parser.add_argument(
"--use_curriculum",
action="store_true",
default=False,
help="Whether to use the curriculum for the terrain generation.",
)
parser.add_argument(
"--show_flat_patches",
action="store_true",
default=False,
help="Whether to show the flat patches computed during the terrain generation.",
)
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import random
import torch
import omni.isaac.orbit.sim as sim_utils
from omni.isaac.orbit.assets import AssetBase
from omni.isaac.orbit.markers import VisualizationMarkers, VisualizationMarkersCfg
from omni.isaac.orbit.terrains import FlatPatchSamplingCfg, TerrainImporter, TerrainImporterCfg
##
# Pre-defined configs
##
from omni.isaac.orbit.terrains.config.rough import ROUGH_TERRAINS_CFG # isort:skip
def design_scene() -> tuple[dict, torch.Tensor]:
"""Designs the scene."""
# Lights
cfg = sim_utils.DomeLightCfg(intensity=2000.0, color=(0.75, 0.75, 0.75))
cfg.func("/World/Light", cfg)
# Parse terrain generation
terrain_gen_cfg = ROUGH_TERRAINS_CFG.replace(curriculum=args_cli.use_curriculum, color_scheme=args_cli.color_scheme)
# Add flat patch configuration
# Note: To have separate colors for each sub-terrain type, we set the flat patch sampling configuration name
# to the sub-terrain name. However, this is not how it should be used in practice. The key name should be
# the intention of the flat patch. For instance, "source" or "target" for spawn and command related flat patches.
if args_cli.show_flat_patches:
for sub_terrain_name, sub_terrain_cfg in terrain_gen_cfg.sub_terrains.items():
sub_terrain_cfg.flat_patch_sampling = {
sub_terrain_name: FlatPatchSamplingCfg(num_patches=10, patch_radius=0.5, max_height_diff=0.05)
}
# Handler for terrains importing
terrain_importer_cfg = TerrainImporterCfg(
num_envs=2048,
env_spacing=3.0,
prim_path="/World/ground",
max_init_terrain_level=None,
terrain_type="generator",
terrain_generator=terrain_gen_cfg,
debug_vis=True,
)
# Remove visual material for height and random color schemes to use the default material
if args_cli.color_scheme in ["height", "random"]:
terrain_importer_cfg.visual_material = None
# Create terrain importer
terrain_importer = TerrainImporter(terrain_importer_cfg)
# Show the flat patches computed
if args_cli.show_flat_patches:
# Configure the flat patches
vis_cfg = VisualizationMarkersCfg(prim_path="/Visuals/TerrainFlatPatches", markers={})
for name in terrain_importer.flat_patches:
vis_cfg.markers[name] = sim_utils.CylinderCfg(
radius=0.5, # note: manually set to the patch radius for visualization
height=0.1,
visual_material=sim_utils.GlassMdlCfg(glass_color=(random.random(), random.random(), random.random())),
)
flat_patches_visualizer = VisualizationMarkers(vis_cfg)
# Visualize the flat patches
all_patch_locations = []
all_patch_indices = []
for i, patch_locations in enumerate(terrain_importer.flat_patches.values()):
num_patch_locations = patch_locations.view(-1, 3).shape[0]
# store the patch locations and indices
all_patch_locations.append(patch_locations.view(-1, 3))
all_patch_indices += [i] * num_patch_locations
# combine the patch locations and indices
flat_patches_visualizer.visualize(torch.cat(all_patch_locations), marker_indices=all_patch_indices)
# return the scene information
scene_entities = {"terrain": terrain_importer}
return scene_entities, terrain_importer.env_origins
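# A short access sketch (only meaningful when flat patches were configured, e.g. with
# --show_flat_patches): `terrain_importer.flat_patches` maps each configured patch name to
# a tensor of 3D patch positions, which can be flattened for downstream sampling:
#
#   patches = terrain_importer.flat_patches[name]  # name as configured above
#   positions = patches.view(-1, 3)                # (total_num_patches, 3)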
def run_simulator(sim: sim_utils.SimulationContext, entities: dict[str, AssetBase], origins: torch.Tensor):
"""Runs the simulation loop."""
# Simulate physics
while simulation_app.is_running():
# perform step
sim.step()
def main():
"""Main function."""
# Initialize the simulation context
sim = sim_utils.SimulationContext(sim_utils.SimulationCfg(dt=0.01, substeps=1))
# Set main camera
sim.set_camera_view(eye=[2.5, 2.5, 2.5], target=[0.0, 0.0, 0.0])
# design scene
scene_entities, scene_origins = design_scene()
# Play the simulator
sim.reset()
# Now we are ready!
print("[INFO]: Setup complete...")
# Run the simulator
run_simulator(sim, scene_entities, scene_origins)
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
| 6,249 | Python | 34.112359 | 120 | 0.692591 |
NVIDIA-Omniverse/orbit/source/standalone/environments/random_agent.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Script to an environment with random action agent."""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Random agent for Orbit environments.")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU pipeline.")
parser.add_argument(
"--disable_fabric", action="store_true", default=False, help="Disable fabric and use USD I/O operations."
)
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import torch
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils import parse_env_cfg
def main():
"""Random actions agent with Orbit environment."""
# create environment configuration
env_cfg = parse_env_cfg(
args_cli.task, use_gpu=not args_cli.cpu, num_envs=args_cli.num_envs, use_fabric=not args_cli.disable_fabric
)
# create environment
env = gym.make(args_cli.task, cfg=env_cfg)
    # print info (this is a vectorized environment)
print(f"[INFO]: Gym observation space: {env.observation_space}")
print(f"[INFO]: Gym action space: {env.action_space}")
# reset environment
env.reset()
# simulate environment
while simulation_app.is_running():
# run everything in inference mode
with torch.inference_mode():
# sample actions from -1 to 1
actions = 2 * torch.rand(env.action_space.shape, device=env.unwrapped.device) - 1
# apply actions
env.step(actions)
# close the simulator
env.close()
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
| 2,276 | Python | 29.36 | 115 | 0.695079 |
NVIDIA-Omniverse/orbit/source/standalone/environments/list_envs.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
Script to print all the available environments in ORBIT.
The script iterates over all registered environments and stores the details in a table.
It prints the name of the environment, the entry point and the config file.
All the environments are registered in the `omni.isaac.orbit_tasks` extension. They start
with `Isaac` in their name.
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
from omni.isaac.orbit.app import AppLauncher
# launch omniverse app
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
from prettytable import PrettyTable
import omni.isaac.orbit_tasks # noqa: F401
def main():
"""Print all environments registered in `omni.isaac.orbit_tasks` extension."""
# print all the available environments
table = PrettyTable(["S. No.", "Task Name", "Entry Point", "Config"])
table.title = "Available Environments in ORBIT"
# set alignment of table columns
table.align["Task Name"] = "l"
table.align["Entry Point"] = "l"
table.align["Config"] = "l"
# count of environments
index = 0
# acquire all Isaac environments names
for task_spec in gym.registry.values():
if "Isaac" in task_spec.id:
# add details to table
table.add_row([index + 1, task_spec.id, task_spec.entry_point, task_spec.kwargs["env_cfg_entry_point"]])
# increment count
index += 1
print(table)
if __name__ == "__main__":
try:
# run the main function
main()
except Exception as e:
raise e
finally:
# close the app
simulation_app.close()
| 1,827 | Python | 25.882353 | 116 | 0.67214 |
NVIDIA-Omniverse/orbit/source/standalone/environments/teleoperation/teleop_se3_agent.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Script to run a keyboard teleoperation with Orbit manipulation environments."""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Keyboard teleoperation for Orbit environments.")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU pipeline.")
parser.add_argument(
"--disable_fabric", action="store_true", default=False, help="Disable fabric and use USD I/O operations."
)
parser.add_argument("--num_envs", type=int, default=1, help="Number of environments to simulate.")
parser.add_argument("--device", type=str, default="keyboard", help="Device for interacting with environment")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument("--sensitivity", type=float, default=1.0, help="Sensitivity factor.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(headless=args_cli.headless)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import torch
import carb
from omni.isaac.orbit.devices import Se3Gamepad, Se3Keyboard, Se3SpaceMouse
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils import parse_env_cfg
def pre_process_actions(delta_pose: torch.Tensor, gripper_command: bool) -> torch.Tensor:
"""Pre-process actions for the environment."""
# compute actions based on environment
if "Reach" in args_cli.task:
# note: reach is the only one that uses a different action space
# compute actions
return delta_pose
else:
# resolve gripper command
gripper_vel = torch.zeros(delta_pose.shape[0], 1, device=delta_pose.device)
gripper_vel[:] = -1.0 if gripper_command else 1.0
# compute actions
return torch.concat([delta_pose, gripper_vel], dim=1)
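# Example of the resulting action layout (shapes assume a 6-DoF delta-pose command):
# for "Reach" tasks the action is the (num_envs, 6) delta pose itself; for all other
# tasks a gripper column is appended, giving (num_envs, 7) with +1.0 = open and -1.0 = close.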
def main():
"""Running keyboard teleoperation with Orbit manipulation environment."""
# parse configuration
env_cfg = parse_env_cfg(
args_cli.task, use_gpu=not args_cli.cpu, num_envs=args_cli.num_envs, use_fabric=not args_cli.disable_fabric
)
# modify configuration
env_cfg.terminations.time_out = None
# create environment
env = gym.make(args_cli.task, cfg=env_cfg)
    # check environment name (for Reach tasks, we don't allow gripper control)
if "Reach" in args_cli.task:
carb.log_warn(
f"The environment '{args_cli.task}' does not support gripper control. The device command will be ignored."
)
# create controller
if args_cli.device.lower() == "keyboard":
teleop_interface = Se3Keyboard(
pos_sensitivity=0.005 * args_cli.sensitivity, rot_sensitivity=0.005 * args_cli.sensitivity
)
elif args_cli.device.lower() == "spacemouse":
teleop_interface = Se3SpaceMouse(
pos_sensitivity=0.05 * args_cli.sensitivity, rot_sensitivity=0.005 * args_cli.sensitivity
)
elif args_cli.device.lower() == "gamepad":
teleop_interface = Se3Gamepad(
pos_sensitivity=0.1 * args_cli.sensitivity, rot_sensitivity=0.1 * args_cli.sensitivity
)
else:
        raise ValueError(f"Invalid device interface '{args_cli.device}'. Supported: 'keyboard', 'spacemouse', 'gamepad'.")
# add teleoperation key for env reset
teleop_interface.add_callback("L", env.reset)
# print helper for keyboard
print(teleop_interface)
# reset environment
env.reset()
teleop_interface.reset()
# simulate environment
while simulation_app.is_running():
# run everything in inference mode
with torch.inference_mode():
# get keyboard command
delta_pose, gripper_command = teleop_interface.advance()
delta_pose = delta_pose.astype("float32")
# convert to torch
delta_pose = torch.tensor(delta_pose, device=env.unwrapped.device).repeat(env.unwrapped.num_envs, 1)
# pre-process actions
actions = pre_process_actions(delta_pose, gripper_command)
# apply actions
env.step(actions)
# close the simulator
env.close()
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
| 4,590 | Python | 34.589147 | 118 | 0.682135 |
NVIDIA-Omniverse/orbit/source/standalone/environments/state_machine/lift_cube_sm.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
Script to run an environment with a pick and lift state machine.
The state machine is implemented in the kernel function `infer_state_machine`.
It uses the `warp` library to run the state machine in parallel on the GPU.
.. code-block:: bash
    ./orbit.sh -p source/standalone/environments/state_machine/lift_cube_sm.py --num_envs 32
"""
"""Launch Omniverse Toolkit first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Pick and lift state machine for lift environments.")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU pipeline.")
parser.add_argument(
"--disable_fabric", action="store_true", default=False, help="Disable fabric and use USD I/O operations."
)
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(headless=args_cli.headless)
simulation_app = app_launcher.app
"""Rest everything else."""
import gymnasium as gym
import torch
from collections.abc import Sequence
import warp as wp
from omni.isaac.orbit.assets.rigid_object.rigid_object_data import RigidObjectData
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.manipulation.lift.lift_env_cfg import LiftEnvCfg
from omni.isaac.orbit_tasks.utils.parse_cfg import parse_env_cfg
# initialize warp
wp.init()
class GripperState:
"""States for the gripper."""
OPEN = wp.constant(1.0)
CLOSE = wp.constant(-1.0)
class PickSmState:
"""States for the pick state machine."""
REST = wp.constant(0)
APPROACH_ABOVE_OBJECT = wp.constant(1)
APPROACH_OBJECT = wp.constant(2)
GRASP_OBJECT = wp.constant(3)
LIFT_OBJECT = wp.constant(4)
class PickSmWaitTime:
"""Additional wait times (in s) for states for before switching."""
REST = wp.constant(0.2)
APPROACH_ABOVE_OBJECT = wp.constant(0.5)
APPROACH_OBJECT = wp.constant(0.6)
GRASP_OBJECT = wp.constant(0.3)
LIFT_OBJECT = wp.constant(1.0)
@wp.kernel
def infer_state_machine(
dt: wp.array(dtype=float),
sm_state: wp.array(dtype=int),
sm_wait_time: wp.array(dtype=float),
ee_pose: wp.array(dtype=wp.transform),
object_pose: wp.array(dtype=wp.transform),
des_object_pose: wp.array(dtype=wp.transform),
des_ee_pose: wp.array(dtype=wp.transform),
gripper_state: wp.array(dtype=float),
offset: wp.array(dtype=wp.transform),
):
# retrieve thread id
tid = wp.tid()
# retrieve state machine state
state = sm_state[tid]
# decide next state
if state == PickSmState.REST:
des_ee_pose[tid] = ee_pose[tid]
gripper_state[tid] = GripperState.OPEN
# wait for a while
if sm_wait_time[tid] >= PickSmWaitTime.REST:
# move to next state and reset wait time
sm_state[tid] = PickSmState.APPROACH_ABOVE_OBJECT
sm_wait_time[tid] = 0.0
elif state == PickSmState.APPROACH_ABOVE_OBJECT:
des_ee_pose[tid] = wp.transform_multiply(offset[tid], object_pose[tid])
gripper_state[tid] = GripperState.OPEN
# TODO: error between current and desired ee pose below threshold
# wait for a while
        if sm_wait_time[tid] >= PickSmWaitTime.APPROACH_ABOVE_OBJECT:
# move to next state and reset wait time
sm_state[tid] = PickSmState.APPROACH_OBJECT
sm_wait_time[tid] = 0.0
elif state == PickSmState.APPROACH_OBJECT:
des_ee_pose[tid] = object_pose[tid]
gripper_state[tid] = GripperState.OPEN
# TODO: error between current and desired ee pose below threshold
# wait for a while
if sm_wait_time[tid] >= PickSmWaitTime.APPROACH_OBJECT:
# move to next state and reset wait time
sm_state[tid] = PickSmState.GRASP_OBJECT
sm_wait_time[tid] = 0.0
elif state == PickSmState.GRASP_OBJECT:
des_ee_pose[tid] = object_pose[tid]
gripper_state[tid] = GripperState.CLOSE
# wait for a while
if sm_wait_time[tid] >= PickSmWaitTime.GRASP_OBJECT:
# move to next state and reset wait time
sm_state[tid] = PickSmState.LIFT_OBJECT
sm_wait_time[tid] = 0.0
elif state == PickSmState.LIFT_OBJECT:
des_ee_pose[tid] = des_object_pose[tid]
gripper_state[tid] = GripperState.CLOSE
# TODO: error between current and desired ee pose below threshold
# wait for a while
if sm_wait_time[tid] >= PickSmWaitTime.LIFT_OBJECT:
            # stay in the final state and reset wait time
            sm_state[tid] = PickSmState.LIFT_OBJECT
sm_wait_time[tid] = 0.0
# increment wait time
sm_wait_time[tid] = sm_wait_time[tid] + dt[tid]
class PickAndLiftSm:
"""A simple state machine in a robot's task space to pick and lift an object.
The state machine is implemented as a warp kernel. It takes in the current state of
the robot's end-effector and the object, and outputs the desired state of the robot's
end-effector and the gripper. The state machine is implemented as a finite state
machine with the following states:
1. REST: The robot is at rest.
2. APPROACH_ABOVE_OBJECT: The robot moves above the object.
3. APPROACH_OBJECT: The robot moves to the object.
4. GRASP_OBJECT: The robot grasps the object.
5. LIFT_OBJECT: The robot lifts the object to the desired pose. This is the final state.
"""
def __init__(self, dt: float, num_envs: int, device: torch.device | str = "cpu"):
"""Initialize the state machine.
Args:
dt: The environment time step.
num_envs: The number of environments to simulate.
device: The device to run the state machine on.
"""
# save parameters
self.dt = float(dt)
self.num_envs = num_envs
self.device = device
# initialize state machine
self.sm_dt = torch.full((self.num_envs,), self.dt, device=self.device)
self.sm_state = torch.full((self.num_envs,), 0, dtype=torch.int32, device=self.device)
self.sm_wait_time = torch.zeros((self.num_envs,), device=self.device)
# desired state
self.des_ee_pose = torch.zeros((self.num_envs, 7), device=self.device)
self.des_gripper_state = torch.full((self.num_envs,), 0.0, device=self.device)
# approach above object offset
self.offset = torch.zeros((self.num_envs, 7), device=self.device)
self.offset[:, 2] = 0.1
self.offset[:, -1] = 1.0 # warp expects quaternion as (x, y, z, w)
# convert to warp
self.sm_dt_wp = wp.from_torch(self.sm_dt, wp.float32)
self.sm_state_wp = wp.from_torch(self.sm_state, wp.int32)
self.sm_wait_time_wp = wp.from_torch(self.sm_wait_time, wp.float32)
self.des_ee_pose_wp = wp.from_torch(self.des_ee_pose, wp.transform)
self.des_gripper_state_wp = wp.from_torch(self.des_gripper_state, wp.float32)
self.offset_wp = wp.from_torch(self.offset, wp.transform)
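        # note: wp.from_torch aliases the underlying torch memory (zero-copy when both live
        # on the same device), so values written by the warp kernel are immediately visible
        # in the torch tensors above.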
    def reset_idx(self, env_ids: Sequence[int] | None = None):
"""Reset the state machine."""
if env_ids is None:
env_ids = slice(None)
self.sm_state[env_ids] = 0
self.sm_wait_time[env_ids] = 0.0
def compute(self, ee_pose: torch.Tensor, object_pose: torch.Tensor, des_object_pose: torch.Tensor):
"""Compute the desired state of the robot's end-effector and the gripper."""
# convert all transformations from (w, x, y, z) to (x, y, z, w)
ee_pose = ee_pose[:, [0, 1, 2, 4, 5, 6, 3]]
object_pose = object_pose[:, [0, 1, 2, 4, 5, 6, 3]]
des_object_pose = des_object_pose[:, [0, 1, 2, 4, 5, 6, 3]]
# convert to warp
ee_pose_wp = wp.from_torch(ee_pose.contiguous(), wp.transform)
object_pose_wp = wp.from_torch(object_pose.contiguous(), wp.transform)
des_object_pose_wp = wp.from_torch(des_object_pose.contiguous(), wp.transform)
# run state machine
wp.launch(
kernel=infer_state_machine,
dim=self.num_envs,
inputs=[
self.sm_dt_wp,
self.sm_state_wp,
self.sm_wait_time_wp,
ee_pose_wp,
object_pose_wp,
des_object_pose_wp,
self.des_ee_pose_wp,
self.des_gripper_state_wp,
self.offset_wp,
],
device=self.device,
)
# convert transformations back to (w, x, y, z)
des_ee_pose = self.des_ee_pose[:, [0, 1, 2, 6, 3, 4, 5]]
# convert to torch
return torch.cat([des_ee_pose, self.des_gripper_state.unsqueeze(-1)], dim=-1)
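    # Usage sketch (illustrative values; poses are given as position + quaternion (w, x, y, z)):
    #
    #   sm = PickAndLiftSm(dt=0.02, num_envs=4, device="cuda")
    #   actions = sm.compute(ee_pose, object_pose, des_object_pose)  # (4, 8) tensor
    #
    # where columns 0:7 hold the desired end-effector pose and column 7 holds the gripper
    # command (+1.0 = open, -1.0 = close).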
def main():
# parse configuration
env_cfg: LiftEnvCfg = parse_env_cfg(
"Isaac-Lift-Cube-Franka-IK-Abs-v0",
use_gpu=not args_cli.cpu,
num_envs=args_cli.num_envs,
use_fabric=not args_cli.disable_fabric,
)
# create environment
env = gym.make("Isaac-Lift-Cube-Franka-IK-Abs-v0", cfg=env_cfg)
# reset environment at start
env.reset()
# create action buffers (position + quaternion)
actions = torch.zeros(env.unwrapped.action_space.shape, device=env.unwrapped.device)
actions[:, 3] = 1.0
# desired object orientation (we only do position control of object)
desired_orientation = torch.zeros((env.unwrapped.num_envs, 4), device=env.unwrapped.device)
desired_orientation[:, 1] = 1.0
# create state machine
pick_sm = PickAndLiftSm(env_cfg.sim.dt * env_cfg.decimation, env.unwrapped.num_envs, env.unwrapped.device)
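    # note: the state machine is stepped at the control rate, i.e. the physics time-step
    # scaled by the decimation factor (illustrative example: dt = 0.01 s with decimation 2
    # gives a 0.02 s state-machine step; the actual values come from the task configuration).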
while simulation_app.is_running():
# run everything in inference mode
with torch.inference_mode():
# step environment
dones = env.step(actions)[-2]
# observations
# -- end-effector frame
ee_frame_sensor = env.unwrapped.scene["ee_frame"]
tcp_rest_position = ee_frame_sensor.data.target_pos_w[..., 0, :].clone() - env.unwrapped.scene.env_origins
tcp_rest_orientation = ee_frame_sensor.data.target_quat_w[..., 0, :].clone()
# -- object frame
object_data: RigidObjectData = env.unwrapped.scene["object"].data
object_position = object_data.root_pos_w - env.unwrapped.scene.env_origins
# -- target object frame
desired_position = env.unwrapped.command_manager.get_command("object_pose")[..., :3]
# advance state machine
actions = pick_sm.compute(
torch.cat([tcp_rest_position, tcp_rest_orientation], dim=-1),
torch.cat([object_position, desired_orientation], dim=-1),
torch.cat([desired_position, desired_orientation], dim=-1),
)
# reset state machine
if dones.any():
pick_sm.reset_idx(dones.nonzero(as_tuple=False).squeeze(-1))
# close the environment
env.close()
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
| 11,404 | Python | 37.016667 | 118 | 0.632673 |
NVIDIA-Omniverse/orbit/source/standalone/environments/state_machine/open_cabinet_sm.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
Script to run an environment with a cabinet opening state machine.
The state machine is implemented in the kernel function `infer_state_machine`.
It uses the `warp` library to run the state machine in parallel on the GPU.
.. code-block:: bash
    ./orbit.sh -p source/standalone/environments/state_machine/open_cabinet_sm.py --num_envs 32
"""
"""Launch Omniverse Toolkit first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Drawer opening state machine for cabinet environments.")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU pipeline.")
parser.add_argument(
"--disable_fabric", action="store_true", default=False, help="Disable fabric and use USD I/O operations."
)
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(headless=args_cli.headless)
simulation_app = app_launcher.app
"""Rest everything else."""
import gymnasium as gym
import torch
import traceback
from collections.abc import Sequence
import carb
import warp as wp
from omni.isaac.orbit.sensors import FrameTransformer
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.manipulation.cabinet.cabinet_env_cfg import CabinetEnvCfg
from omni.isaac.orbit_tasks.utils.parse_cfg import parse_env_cfg
# initialize warp
wp.init()
class GripperState:
"""States for the gripper."""
OPEN = wp.constant(1.0)
CLOSE = wp.constant(-1.0)
class OpenDrawerSmState:
"""States for the cabinet drawer opening state machine."""
REST = wp.constant(0)
APPROACH_INFRONT_HANDLE = wp.constant(1)
APPROACH_HANDLE = wp.constant(2)
GRASP_HANDLE = wp.constant(3)
OPEN_DRAWER = wp.constant(4)
RELEASE_HANDLE = wp.constant(5)
class OpenDrawerSmWaitTime:
"""Additional wait times (in s) for states for before switching."""
REST = wp.constant(0.5)
APPROACH_INFRONT_HANDLE = wp.constant(1.25)
APPROACH_HANDLE = wp.constant(1.0)
GRASP_HANDLE = wp.constant(1.0)
OPEN_DRAWER = wp.constant(3.0)
RELEASE_HANDLE = wp.constant(0.2)
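# Every state in the kernel below follows the same dwell-time pattern: command a desired
# end-effector pose and gripper action, then advance only once the per-state wait time
# has elapsed. A plain-Python sketch of that pattern (illustrative only; `dwell` and
# `next_state` are hypothetical lookup tables, not part of this file):
#
#   def tick(state, wait_time, dt):
#       if wait_time >= dwell[state]:
#           state, wait_time = next_state[state], 0.0  # transition and reset the timer
#       return state, wait_time + dt  # the timer accumulates dt on every step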
@wp.kernel
def infer_state_machine(
dt: wp.array(dtype=float),
sm_state: wp.array(dtype=int),
sm_wait_time: wp.array(dtype=float),
ee_pose: wp.array(dtype=wp.transform),
handle_pose: wp.array(dtype=wp.transform),
des_ee_pose: wp.array(dtype=wp.transform),
gripper_state: wp.array(dtype=float),
handle_approach_offset: wp.array(dtype=wp.transform),
handle_grasp_offset: wp.array(dtype=wp.transform),
drawer_opening_rate: wp.array(dtype=wp.transform),
):
# retrieve thread id
tid = wp.tid()
# retrieve state machine state
state = sm_state[tid]
# decide next state
if state == OpenDrawerSmState.REST:
des_ee_pose[tid] = ee_pose[tid]
gripper_state[tid] = GripperState.OPEN
# wait for a while
if sm_wait_time[tid] >= OpenDrawerSmWaitTime.REST:
# move to next state and reset wait time
sm_state[tid] = OpenDrawerSmState.APPROACH_INFRONT_HANDLE
sm_wait_time[tid] = 0.0
elif state == OpenDrawerSmState.APPROACH_INFRONT_HANDLE:
des_ee_pose[tid] = wp.transform_multiply(handle_approach_offset[tid], handle_pose[tid])
gripper_state[tid] = GripperState.OPEN
# TODO: error between current and desired ee pose below threshold
# wait for a while
if sm_wait_time[tid] >= OpenDrawerSmWaitTime.APPROACH_INFRONT_HANDLE:
# move to next state and reset wait time
sm_state[tid] = OpenDrawerSmState.APPROACH_HANDLE
sm_wait_time[tid] = 0.0
elif state == OpenDrawerSmState.APPROACH_HANDLE:
des_ee_pose[tid] = handle_pose[tid]
gripper_state[tid] = GripperState.OPEN
# TODO: error between current and desired ee pose below threshold
# wait for a while
if sm_wait_time[tid] >= OpenDrawerSmWaitTime.APPROACH_HANDLE:
# move to next state and reset wait time
sm_state[tid] = OpenDrawerSmState.GRASP_HANDLE
sm_wait_time[tid] = 0.0
elif state == OpenDrawerSmState.GRASP_HANDLE:
des_ee_pose[tid] = wp.transform_multiply(handle_grasp_offset[tid], handle_pose[tid])
gripper_state[tid] = GripperState.CLOSE
# wait for a while
if sm_wait_time[tid] >= OpenDrawerSmWaitTime.GRASP_HANDLE:
# move to next state and reset wait time
sm_state[tid] = OpenDrawerSmState.OPEN_DRAWER
sm_wait_time[tid] = 0.0
elif state == OpenDrawerSmState.OPEN_DRAWER:
des_ee_pose[tid] = wp.transform_multiply(drawer_opening_rate[tid], handle_pose[tid])
gripper_state[tid] = GripperState.CLOSE
# wait for a while
if sm_wait_time[tid] >= OpenDrawerSmWaitTime.OPEN_DRAWER:
# move to next state and reset wait time
sm_state[tid] = OpenDrawerSmState.RELEASE_HANDLE
sm_wait_time[tid] = 0.0
elif state == OpenDrawerSmState.RELEASE_HANDLE:
des_ee_pose[tid] = ee_pose[tid]
gripper_state[tid] = GripperState.OPEN
# wait for a while
if sm_wait_time[tid] >= OpenDrawerSmWaitTime.RELEASE_HANDLE:
# stay in the final state and reset wait time
sm_state[tid] = OpenDrawerSmState.RELEASE_HANDLE
sm_wait_time[tid] = 0.0
# increment wait time
sm_wait_time[tid] = sm_wait_time[tid] + dt[tid]
class OpenDrawerSm:
"""A simple state machine in a robot's task space to open a drawer in the cabinet.
The state machine is implemented as a warp kernel. It takes in the current state of
the robot's end-effector and the object, and outputs the desired state of the robot's
end-effector and the gripper. The state machine is implemented as a finite state
machine with the following states:
1. REST: The robot is at rest.
2. APPROACH_INFRONT_HANDLE: The robot moves to a pre-grasp pose in front of the handle.
3. APPROACH_HANDLE: The robot moves towards the handle of the drawer.
4. GRASP_HANDLE: The robot grasps the handle of the drawer.
5. OPEN_DRAWER: The robot opens the drawer.
6. RELEASE_HANDLE: The robot releases the handle of the drawer. This is the final state.
"""
def __init__(self, dt: float, num_envs: int, device: torch.device | str = "cpu"):
"""Initialize the state machine.
Args:
dt: The environment time step.
num_envs: The number of environments to simulate.
device: The device to run the state machine on.
"""
# save parameters
self.dt = float(dt)
self.num_envs = num_envs
self.device = device
# initialize state machine
self.sm_dt = torch.full((self.num_envs,), self.dt, device=self.device)
self.sm_state = torch.full((self.num_envs,), 0, dtype=torch.int32, device=self.device)
self.sm_wait_time = torch.zeros((self.num_envs,), device=self.device)
# desired state
self.des_ee_pose = torch.zeros((self.num_envs, 7), device=self.device)
self.des_gripper_state = torch.full((self.num_envs,), 0.0, device=self.device)
# approach in front of the handle
self.handle_approach_offset = torch.zeros((self.num_envs, 7), device=self.device)
self.handle_approach_offset[:, 0] = -0.1
self.handle_approach_offset[:, -1] = 1.0 # warp expects quaternion as (x, y, z, w)
# handle grasp offset
self.handle_grasp_offset = torch.zeros((self.num_envs, 7), device=self.device)
self.handle_grasp_offset[:, 0] = 0.025
self.handle_grasp_offset[:, -1] = 1.0 # warp expects quaternion as (x, y, z, w)
# drawer opening rate
self.drawer_opening_rate = torch.zeros((self.num_envs, 7), device=self.device)
self.drawer_opening_rate[:, 0] = -0.015
self.drawer_opening_rate[:, -1] = 1.0 # warp expects quaternion as (x, y, z, w)
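# note: this offset is re-composed with the handle pose on every step, so the commanded
# pose always sits slightly behind the current handle position along the x-axis and the
# end-effector keeps pulling as the drawer slides out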
# convert to warp
self.sm_dt_wp = wp.from_torch(self.sm_dt, wp.float32)
self.sm_state_wp = wp.from_torch(self.sm_state, wp.int32)
self.sm_wait_time_wp = wp.from_torch(self.sm_wait_time, wp.float32)
self.des_ee_pose_wp = wp.from_torch(self.des_ee_pose, wp.transform)
self.des_gripper_state_wp = wp.from_torch(self.des_gripper_state, wp.float32)
self.handle_approach_offset_wp = wp.from_torch(self.handle_approach_offset, wp.transform)
self.handle_grasp_offset_wp = wp.from_torch(self.handle_grasp_offset, wp.transform)
self.drawer_opening_rate_wp = wp.from_torch(self.drawer_opening_rate, wp.transform)
def reset_idx(self, env_ids: Sequence[int] | None = None):
"""Reset the state machine."""
if env_ids is None:
env_ids = slice(None)
# reset state machine
self.sm_state[env_ids] = 0
self.sm_wait_time[env_ids] = 0.0
def compute(self, ee_pose: torch.Tensor, handle_pose: torch.Tensor):
"""Compute the desired state of the robot's end-effector and the gripper."""
# convert all transformations from (w, x, y, z) to (x, y, z, w)
ee_pose = ee_pose[:, [0, 1, 2, 4, 5, 6, 3]]
handle_pose = handle_pose[:, [0, 1, 2, 4, 5, 6, 3]]
# convert to warp
ee_pose_wp = wp.from_torch(ee_pose.contiguous(), wp.transform)
handle_pose_wp = wp.from_torch(handle_pose.contiguous(), wp.transform)
# run state machine
wp.launch(
kernel=infer_state_machine,
dim=self.num_envs,
inputs=[
self.sm_dt_wp,
self.sm_state_wp,
self.sm_wait_time_wp,
ee_pose_wp,
handle_pose_wp,
self.des_ee_pose_wp,
self.des_gripper_state_wp,
self.handle_approach_offset_wp,
self.handle_grasp_offset_wp,
self.drawer_opening_rate_wp,
],
device=self.device,
)
# convert transformations back to (w, x, y, z)
des_ee_pose = self.des_ee_pose[:, [0, 1, 2, 6, 3, 4, 5]]
# convert to torch
return torch.cat([des_ee_pose, self.des_gripper_state.unsqueeze(-1)], dim=-1)
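# Minimal usage sketch, independent of the environment (hypothetical values; poses are
# (pos, quat) in (w, x, y, z) layout as expected by compute()):
#   sm = OpenDrawerSm(dt=0.02, num_envs=4, device="cuda:0")
#   ee = torch.zeros(4, 7, device="cuda:0"); ee[:, 3] = 1.0  # identity orientation
#   handle = ee.clone(); handle[:, 0] = 0.5  # handle 0.5 m ahead of the origin
#   actions = sm.compute(ee, handle)  # -> shape (4, 8): desired pose + gripper command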
def main():
# parse configuration
env_cfg: CabinetEnvCfg = parse_env_cfg(
"Isaac-Open-Drawer-Franka-IK-Abs-v0",
use_gpu=not args_cli.cpu,
num_envs=args_cli.num_envs,
use_fabric=not args_cli.disable_fabric,
)
# create environment
env = gym.make("Isaac-Open-Drawer-Franka-IK-Abs-v0", cfg=env_cfg)
# reset environment at start
env.reset()
# create action buffers (position + quaternion)
actions = torch.zeros(env.unwrapped.action_space.shape, device=env.unwrapped.device)
actions[:, 3] = 1.0
# create state machine
open_sm = OpenDrawerSm(env_cfg.sim.dt * env_cfg.decimation, env.unwrapped.num_envs, env.unwrapped.device)
while simulation_app.is_running():
# run everything in inference mode
with torch.inference_mode():
# step environment
dones = env.step(actions)[-2]
# observations
# -- end-effector frame
ee_frame_tf: FrameTransformer = env.unwrapped.scene["ee_frame"]
tcp_rest_position = ee_frame_tf.data.target_pos_w[..., 0, :].clone() - env.unwrapped.scene.env_origins
tcp_rest_orientation = ee_frame_tf.data.target_quat_w[..., 0, :].clone()
# -- handle frame
cabinet_frame_tf: FrameTransformer = env.unwrapped.scene["cabinet_frame"]
cabinet_position = cabinet_frame_tf.data.target_pos_w[..., 0, :].clone() - env.unwrapped.scene.env_origins
cabinet_orientation = cabinet_frame_tf.data.target_quat_w[..., 0, :].clone()
# advance state machine
actions = open_sm.compute(
torch.cat([tcp_rest_position, tcp_rest_orientation], dim=-1),
torch.cat([cabinet_position, cabinet_orientation], dim=-1),
)
# reset state machine
if dones.any():
open_sm.reset_idx(dones.nonzero(as_tuple=False).squeeze(-1))
# close the environment
env.close()
if __name__ == "__main__":
try:
# run the main execution
main()
except Exception as err:
carb.log_error(err)
carb.log_error(traceback.format_exc())
raise
finally:
# close sim app
simulation_app.close()
| 12,935 | Python | 38.559633 | 118 | 0.639351 |
NVIDIA-Omniverse/orbit/source/standalone/workflows/skrl/play.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
Script to play a checkpoint of an RL agent from skrl.
Visit the skrl documentation (https://skrl.readthedocs.io) to see the examples structured in
a more user-friendly way.
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Play a checkpoint of an RL agent from skrl.")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU pipeline.")
parser.add_argument(
"--disable_fabric", action="store_true", default=False, help="Disable fabric and use USD I/O operations."
)
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument("--checkpoint", type=str, default=None, help="Path to model checkpoint.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import os
import torch
from skrl.agents.torch.ppo import PPO, PPO_DEFAULT_CONFIG
from skrl.utils.model_instantiators.torch import deterministic_model, gaussian_model, shared_model
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils import get_checkpoint_path, load_cfg_from_registry, parse_env_cfg
from omni.isaac.orbit_tasks.utils.wrappers.skrl import SkrlVecEnvWrapper, process_skrl_cfg
def main():
"""Play with skrl agent."""
# parse env configuration
env_cfg = parse_env_cfg(
args_cli.task, use_gpu=not args_cli.cpu, num_envs=args_cli.num_envs, use_fabric=not args_cli.disable_fabric
)
experiment_cfg = load_cfg_from_registry(args_cli.task, "skrl_cfg_entry_point")
# create isaac environment
env = gym.make(args_cli.task, cfg=env_cfg)
# wrap around environment for skrl
env = SkrlVecEnvWrapper(env) # same as: `wrap_env(env, wrapper="isaac-orbit")`
# instantiate models using skrl model instantiator utility
# https://skrl.readthedocs.io/en/latest/modules/skrl.utils.model_instantiators.html
models = {}
# non-shared models
if experiment_cfg["models"]["separate"]:
models["policy"] = gaussian_model(
observation_space=env.observation_space,
action_space=env.action_space,
device=env.device,
**process_skrl_cfg(experiment_cfg["models"]["policy"]),
)
models["value"] = deterministic_model(
observation_space=env.observation_space,
action_space=env.action_space,
device=env.device,
**process_skrl_cfg(experiment_cfg["models"]["value"]),
)
# shared models
else:
models["policy"] = shared_model(
observation_space=env.observation_space,
action_space=env.action_space,
device=env.device,
structure=None,
roles=["policy", "value"],
parameters=[
process_skrl_cfg(experiment_cfg["models"]["policy"]),
process_skrl_cfg(experiment_cfg["models"]["value"]),
],
)
models["value"] = models["policy"]
# configure and instantiate PPO agent
# https://skrl.readthedocs.io/en/latest/modules/skrl.agents.ppo.html
agent_cfg = PPO_DEFAULT_CONFIG.copy()
experiment_cfg["agent"]["rewards_shaper"] = None # avoid 'dictionary changed size during iteration'
agent_cfg.update(process_skrl_cfg(experiment_cfg["agent"]))
agent_cfg["state_preprocessor_kwargs"].update({"size": env.observation_space, "device": env.device})
agent_cfg["value_preprocessor_kwargs"].update({"size": 1, "device": env.device})
agent_cfg["experiment"]["write_interval"] = 0 # don't log to Tensorboard
agent_cfg["experiment"]["checkpoint_interval"] = 0 # don't generate checkpoints
agent = PPO(
models=models,
memory=None, # memory is optional during evaluation
cfg=agent_cfg,
observation_space=env.observation_space,
action_space=env.action_space,
device=env.device,
)
# specify directory for logging experiments (load checkpoint)
log_root_path = os.path.join("logs", "skrl", experiment_cfg["agent"]["experiment"]["directory"])
log_root_path = os.path.abspath(log_root_path)
print(f"[INFO] Loading experiment from directory: {log_root_path}")
# get checkpoint path
if args_cli.checkpoint:
resume_path = os.path.abspath(args_cli.checkpoint)
else:
resume_path = get_checkpoint_path(log_root_path, other_dirs=["checkpoints"])
print(f"[INFO] Loading model checkpoint from: {resume_path}")
# initialize agent
agent.init()
agent.load(resume_path)
# set agent to evaluation mode
agent.set_running_mode("eval")
# reset environment
obs, _ = env.reset()
# simulate environment
while simulation_app.is_running():
# run everything in inference mode
with torch.inference_mode():
# agent stepping
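# act() returns a (actions, log_prob, outputs) tuple; only the actions are used here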
actions = agent.act(obs, timestep=0, timesteps=0)[0]
# env stepping
obs, _, _, _, _ = env.step(actions)
# close the simulator
env.close()
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
| 5,657 | Python | 35.269231 | 115 | 0.668022 |
NVIDIA-Omniverse/orbit/source/standalone/workflows/skrl/train.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""
Script to train an RL agent with skrl.
Visit the skrl documentation (https://skrl.readthedocs.io) to see the examples structured in
a more user-friendly way.
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Train an RL agent with skrl.")
parser.add_argument("--video", action="store_true", default=False, help="Record videos during training.")
parser.add_argument("--video_length", type=int, default=200, help="Length of the recorded video (in steps).")
parser.add_argument("--video_interval", type=int, default=2000, help="Interval between video recordings (in steps).")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU pipeline.")
parser.add_argument(
"--disable_fabric", action="store_true", default=False, help="Disable fabric and use USD I/O operations."
)
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import os
from datetime import datetime
from skrl.agents.torch.ppo import PPO, PPO_DEFAULT_CONFIG
from skrl.memories.torch import RandomMemory
from skrl.utils import set_seed
from skrl.utils.model_instantiators.torch import deterministic_model, gaussian_model, shared_model
from omni.isaac.orbit.utils.dict import print_dict
from omni.isaac.orbit.utils.io import dump_pickle, dump_yaml
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils import load_cfg_from_registry, parse_env_cfg
from omni.isaac.orbit_tasks.utils.wrappers.skrl import SkrlSequentialLogTrainer, SkrlVecEnvWrapper, process_skrl_cfg
def main():
"""Train with skrl agent."""
# read the seed from command line
args_cli_seed = args_cli.seed
# parse configuration
env_cfg = parse_env_cfg(
args_cli.task, use_gpu=not args_cli.cpu, num_envs=args_cli.num_envs, use_fabric=not args_cli.disable_fabric
)
experiment_cfg = load_cfg_from_registry(args_cli.task, "skrl_cfg_entry_point")
# specify directory for logging experiments
log_root_path = os.path.join("logs", "skrl", experiment_cfg["agent"]["experiment"]["directory"])
log_root_path = os.path.abspath(log_root_path)
print(f"[INFO] Logging experiment in directory: {log_root_path}")
# specify directory for logging runs: {time-stamp}_{run_name}
log_dir = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
if experiment_cfg["agent"]["experiment"]["experiment_name"]:
log_dir += f'_{experiment_cfg["agent"]["experiment"]["experiment_name"]}'
# set directory into agent config
experiment_cfg["agent"]["experiment"]["directory"] = log_root_path
experiment_cfg["agent"]["experiment"]["experiment_name"] = log_dir
# update log_dir
log_dir = os.path.join(log_root_path, log_dir)
# dump the configuration into log-directory
dump_yaml(os.path.join(log_dir, "params", "env.yaml"), env_cfg)
dump_yaml(os.path.join(log_dir, "params", "agent.yaml"), experiment_cfg)
dump_pickle(os.path.join(log_dir, "params", "env.pkl"), env_cfg)
dump_pickle(os.path.join(log_dir, "params", "agent.pkl"), experiment_cfg)
# create isaac environment
env = gym.make(args_cli.task, cfg=env_cfg, render_mode="rgb_array" if args_cli.video else None)
# wrap for video recording
if args_cli.video:
video_kwargs = {
"video_folder": os.path.join(log_dir, "videos"),
"step_trigger": lambda step: step % args_cli.video_interval == 0,
"video_length": args_cli.video_length,
"disable_logger": True,
}
print("[INFO] Recording videos during training.")
print_dict(video_kwargs, nesting=4)
env = gym.wrappers.RecordVideo(env, **video_kwargs)
# wrap around environment for skrl
env = SkrlVecEnvWrapper(env) # same as: `wrap_env(env, wrapper="isaac-orbit")`
# set seed for the experiment (override from command line)
set_seed(args_cli_seed if args_cli_seed is not None else experiment_cfg["seed"])
# instantiate models using skrl model instantiator utility
# https://skrl.readthedocs.io/en/latest/modules/skrl.utils.model_instantiators.html
models = {}
# non-shared models
if experiment_cfg["models"]["separate"]:
models["policy"] = gaussian_model(
observation_space=env.observation_space,
action_space=env.action_space,
device=env.device,
**process_skrl_cfg(experiment_cfg["models"]["policy"]),
)
models["value"] = deterministic_model(
observation_space=env.observation_space,
action_space=env.action_space,
device=env.device,
**process_skrl_cfg(experiment_cfg["models"]["value"]),
)
# shared models
else:
models["policy"] = shared_model(
observation_space=env.observation_space,
action_space=env.action_space,
device=env.device,
structure=None,
roles=["policy", "value"],
parameters=[
process_skrl_cfg(experiment_cfg["models"]["policy"]),
process_skrl_cfg(experiment_cfg["models"]["value"]),
],
)
models["value"] = models["policy"]
# instantiate a RandomMemory as rollout buffer (any memory can be used for this)
# https://skrl.readthedocs.io/en/latest/modules/skrl.memories.random.html
memory_size = experiment_cfg["agent"]["rollouts"] # memory_size is the agent's number of rollouts
memory = RandomMemory(memory_size=memory_size, num_envs=env.num_envs, device=env.device)
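# with this sizing the buffer holds exactly rollouts x num_envs transitions, i.e. one
# full batch of on-policy samples between PPO updates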
# configure and instantiate PPO agent
# https://skrl.readthedocs.io/en/latest/modules/skrl.agents.ppo.html
agent_cfg = PPO_DEFAULT_CONFIG.copy()
experiment_cfg["agent"]["rewards_shaper"] = None # avoid 'dictionary changed size during iteration'
agent_cfg.update(process_skrl_cfg(experiment_cfg["agent"]))
agent_cfg["state_preprocessor_kwargs"].update({"size": env.observation_space, "device": env.device})
agent_cfg["value_preprocessor_kwargs"].update({"size": 1, "device": env.device})
agent = PPO(
models=models,
memory=memory,
cfg=agent_cfg,
observation_space=env.observation_space,
action_space=env.action_space,
device=env.device,
)
# configure and instantiate a custom RL trainer for logging episode events
# https://skrl.readthedocs.io/en/latest/modules/skrl.trainers.base_class.html
trainer_cfg = experiment_cfg["trainer"]
trainer = SkrlSequentialLogTrainer(cfg=trainer_cfg, env=env, agents=agent)
# train the agent
trainer.train()
# close the simulator
env.close()
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
| 7,471 | Python | 39.608695 | 117 | 0.680498 |
NVIDIA-Omniverse/orbit/source/standalone/workflows/robomimic/play.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Script to run a trained policy from robomimic."""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Play policy trained using robomimic for Orbit environments.")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU pipeline.")
parser.add_argument(
"--disable_fabric", action="store_true", default=False, help="Disable fabric and use USD I/O operations."
)
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument("--checkpoint", type=str, default=None, help="Pytorch model checkpoint to load.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import torch
import robomimic # noqa: F401
import robomimic.utils.file_utils as FileUtils
import robomimic.utils.torch_utils as TorchUtils
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils import parse_env_cfg
def main():
"""Run a trained policy from robomimic with Orbit environment."""
# parse configuration
env_cfg = parse_env_cfg(args_cli.task, use_gpu=not args_cli.cpu, num_envs=1, use_fabric=not args_cli.disable_fabric)
# we want to have the terms in the observations returned as a dictionary
# rather than a concatenated tensor
env_cfg.observations.policy.concatenate_terms = False
# create environment
env = gym.make(args_cli.task, cfg=env_cfg)
# acquire device
device = TorchUtils.get_torch_device(try_to_use_cuda=True)
# restore policy
policy, _ = FileUtils.policy_from_checkpoint(ckpt_path=args_cli.checkpoint, device=device, verbose=True)
# reset environment
obs_dict, _ = env.reset()
# robomimic only cares about policy observations
obs = obs_dict["policy"]
# simulate environment
while simulation_app.is_running():
# run everything in inference mode
with torch.inference_mode():
# compute actions
actions = policy(obs)
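# the policy returns a numpy array; move it to the torch device and reshape to
# (num_envs, action_dim) -- this script always runs with a single environment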
actions = torch.from_numpy(actions).to(device=device).view(1, env.action_space.shape[1])
# apply actions
obs_dict = env.step(actions)[0]
# robomimic only cares about policy observations
obs = obs_dict["policy"]
# close the simulator
env.close()
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
| 2,845 | Python | 31.340909 | 120 | 0.702988 |
NVIDIA-Omniverse/orbit/source/standalone/workflows/robomimic/collect_demonstrations.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Script to collect demonstrations with Orbit environments."""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Collect demonstrations for Orbit environments.")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU pipeline.")
parser.add_argument("--num_envs", type=int, default=1, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument("--device", type=str, default="keyboard", help="Device for interacting with environment")
parser.add_argument("--num_demos", type=int, default=1, help="Number of episodes to store in the dataset.")
parser.add_argument("--filename", type=str, default="hdf_dataset", help="Basename of output file.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch the simulator
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import contextlib
import gymnasium as gym
import os
import torch
from omni.isaac.orbit.devices import Se3Keyboard, Se3SpaceMouse
from omni.isaac.orbit.managers import TerminationTermCfg as DoneTerm
from omni.isaac.orbit.utils.io import dump_pickle, dump_yaml
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.manipulation.lift import mdp
from omni.isaac.orbit_tasks.utils.data_collector import RobomimicDataCollector
from omni.isaac.orbit_tasks.utils.parse_cfg import parse_env_cfg
def pre_process_actions(delta_pose: torch.Tensor, gripper_command: bool) -> torch.Tensor:
"""Pre-process actions for the environment."""
# compute actions based on environment
if "Reach" in args_cli.task:
# note: reach is the only one that uses a different action space
# compute actions
return delta_pose
else:
# resolve gripper command
gripper_vel = torch.zeros((delta_pose.shape[0], 1), dtype=torch.float, device=delta_pose.device)
gripper_vel[:] = -1 if gripper_command else 1
# compute actions
return torch.concat([delta_pose, gripper_vel], dim=1)
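# e.g. for the lift task (hypothetical values), a 6-dim delta pose with the gripper
# closed becomes a 7-dim action:
#   pre_process_actions(torch.zeros(1, 6), gripper_command=True)
#   # -> tensor([[0., 0., 0., 0., 0., 0., -1.]])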
def main():
"""Collect demonstrations from the environment using teleop interfaces."""
assert (
args_cli.task == "Isaac-Lift-Cube-Franka-IK-Rel-v0"
), "Only 'Isaac-Lift-Cube-Franka-IK-Rel-v0' is supported currently."
# parse configuration
env_cfg = parse_env_cfg(args_cli.task, use_gpu=not args_cli.cpu, num_envs=args_cli.num_envs)
# modify configuration such that the environment runs indefinitely
# until goal is reached
env_cfg.terminations.time_out = None
# set the resampling time range to large number to avoid resampling
env_cfg.commands.object_pose.resampling_time_range = (1.0e9, 1.0e9)
# we want to have the terms in the observations returned as a dictionary
# rather than a concatenated tensor
env_cfg.observations.policy.concatenate_terms = False
# add termination condition for reaching the goal, otherwise the environment won't reset
env_cfg.terminations.object_reached_goal = DoneTerm(func=mdp.object_reached_goal)
# create environment
env = gym.make(args_cli.task, cfg=env_cfg)
# create controller
if args_cli.device.lower() == "keyboard":
teleop_interface = Se3Keyboard(pos_sensitivity=0.04, rot_sensitivity=0.08)
elif args_cli.device.lower() == "spacemouse":
teleop_interface = Se3SpaceMouse(pos_sensitivity=0.05, rot_sensitivity=0.005)
else:
raise ValueError(f"Invalid device interface '{args_cli.device}'. Supported: 'keyboard', 'spacemouse'.")
# add teleoperation key for env reset
teleop_interface.add_callback("L", env.reset)
# print helper
print(teleop_interface)
# specify directory for logging experiments
log_dir = os.path.join("./logs/robomimic", args_cli.task)
# dump the configuration into log-directory
dump_yaml(os.path.join(log_dir, "params", "env.yaml"), env_cfg)
dump_pickle(os.path.join(log_dir, "params", "env.pkl"), env_cfg)
# create data-collector
collector_interface = RobomimicDataCollector(
env_name=args_cli.task,
directory_path=log_dir,
filename=args_cli.filename,
num_demos=args_cli.num_demos,
flush_freq=env.num_envs,
env_config={"device": args_cli.device},
)
# reset environment
obs_dict, _ = env.reset()
# reset interfaces
teleop_interface.reset()
collector_interface.reset()
# simulate environment -- run everything in inference mode
with contextlib.suppress(KeyboardInterrupt), torch.inference_mode():
while not collector_interface.is_stopped():
# get keyboard command
delta_pose, gripper_command = teleop_interface.advance()
# convert to torch
delta_pose = torch.tensor(delta_pose, dtype=torch.float, device=env.device).repeat(env.num_envs, 1)
# compute actions based on environment
actions = pre_process_actions(delta_pose, gripper_command)
# TODO: Deal with the case when reset is triggered by teleoperation device.
# The observations need to be recollected.
# store signals before stepping
# -- obs
for key, value in obs_dict["policy"].items():
collector_interface.add(f"obs/{key}", value)
# -- actions
collector_interface.add("actions", actions)
# perform action on environment
obs_dict, rewards, terminated, truncated, info = env.step(actions)
dones = terminated | truncated
# check whether the simulation is stopped
if env.unwrapped.sim.is_stopped():
break
# robomimic only cares about policy observations
# store signals from the environment
# -- next_obs
for key, value in obs_dict["policy"].items():
collector_interface.add(f"next_obs/{key}", value)
# -- rewards
collector_interface.add("rewards", rewards)
# -- dones
collector_interface.add("dones", dones)
# -- is success label
collector_interface.add("success", env.termination_manager.get_term("object_reached_goal"))
# flush data from collector for successful environments
reset_env_ids = dones.nonzero(as_tuple=False).squeeze(-1)
collector_interface.flush(reset_env_ids)
# check if enough data is collected
if collector_interface.is_stopped():
break
# close the simulator
collector_interface.close()
env.close()
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
| 7,116 | Python | 38.320442 | 111 | 0.676504 |
NVIDIA-Omniverse/orbit/source/standalone/workflows/robomimic/train.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
# MIT License
#
# Copyright (c) 2021 Stanford Vision and Learning Lab
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
"""
The main entry point for training policies from pre-collected data.
Args:
algo: name of the algorithm to run.
task: name of the environment.
name: if provided, override the experiment name defined in the config
dataset: if provided, override the dataset path defined in the config
This file has been modified from the original version in the following ways:
* Added import of AppLauncher from omni.isaac.orbit.app to resolve the configuration to load for training.
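Example usage (the task and algorithm names are illustrative):
    ./orbit.sh -p source/standalone/workflows/robomimic/train.py --task Isaac-Lift-Cube-Franka-IK-Rel-v0 --algo bc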
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
from omni.isaac.orbit.app import AppLauncher
# launch omniverse app
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app
"""Rest everything follows."""
import argparse
import gymnasium as gym
import json
import numpy as np
import os
import sys
import time
import torch
import traceback
from collections import OrderedDict
from torch.utils.data import DataLoader
import psutil
import robomimic.utils.env_utils as EnvUtils
import robomimic.utils.file_utils as FileUtils
import robomimic.utils.obs_utils as ObsUtils
import robomimic.utils.torch_utils as TorchUtils
import robomimic.utils.train_utils as TrainUtils
from robomimic.algo import RolloutPolicy, algo_factory
from robomimic.config import config_factory
from robomimic.utils.log_utils import DataLogger, PrintLogger
# Needed so that environment is registered
import omni.isaac.orbit_tasks # noqa: F401
def train(config, device):
"""Train a model using the algorithm."""
# first set seeds
np.random.seed(config.train.seed)
torch.manual_seed(config.train.seed)
print("\n============= New Training Run with Config =============")
print(config)
print("")
log_dir, ckpt_dir, video_dir = TrainUtils.get_exp_dir(config)
print(f">>> Saving logs into directory: {log_dir}")
print(f">>> Saving checkpoints into directory: {ckpt_dir}")
print(f">>> Saving videos into directory: {video_dir}")
if config.experiment.logging.terminal_output_to_txt:
# log stdout and stderr to a text file
logger = PrintLogger(os.path.join(log_dir, "log.txt"))
sys.stdout = logger
sys.stderr = logger
# read config to set up metadata for observation modalities (e.g. detecting rgb observations)
ObsUtils.initialize_obs_utils_with_config(config)
# make sure the dataset exists
dataset_path = os.path.expanduser(config.train.data)
if not os.path.exists(dataset_path):
raise FileNotFoundError(f"Dataset at provided path {dataset_path} not found!")
# load basic metadata from training file
print("\n============= Loaded Environment Metadata =============")
env_meta = FileUtils.get_env_metadata_from_dataset(dataset_path=config.train.data)
shape_meta = FileUtils.get_shape_metadata_from_dataset(
dataset_path=config.train.data, all_obs_keys=config.all_obs_keys, verbose=True
)
if config.experiment.env is not None:
env_meta["env_name"] = config.experiment.env
print("=" * 30 + "\n" + "Replacing Env to {}\n".format(env_meta["env_name"]) + "=" * 30)
# create environment
envs = OrderedDict()
if config.experiment.rollout.enabled:
# create environments for validation runs
env_names = [env_meta["env_name"]]
if config.experiment.additional_envs is not None:
for name in config.experiment.additional_envs:
env_names.append(name)
for env_name in env_names:
env = EnvUtils.create_env_from_metadata(
env_meta=env_meta,
env_name=env_name,
render=False,
render_offscreen=config.experiment.render_video,
use_image_obs=shape_meta["use_images"],
)
envs[env.name] = env
print(envs[env.name])
print("")
# setup for a new training run
data_logger = DataLogger(log_dir, config=config, log_tb=config.experiment.logging.log_tb)
model = algo_factory(
algo_name=config.algo_name,
config=config,
obs_key_shapes=shape_meta["all_shapes"],
ac_dim=shape_meta["ac_dim"],
device=device,
)
# save the config as a json file
with open(os.path.join(log_dir, "..", "config.json"), "w") as outfile:
json.dump(config, outfile, indent=4)
print("\n============= Model Summary =============")
print(model) # print model summary
print("")
# load training data
trainset, validset = TrainUtils.load_data_for_training(config, obs_keys=shape_meta["all_obs_keys"])
train_sampler = trainset.get_dataset_sampler()
print("\n============= Training Dataset =============")
print(trainset)
print("")
# maybe retrieve statistics for normalizing observations
obs_normalization_stats = None
if config.train.hdf5_normalize_obs:
obs_normalization_stats = trainset.get_obs_normalization_stats()
# initialize data loaders
train_loader = DataLoader(
dataset=trainset,
sampler=train_sampler,
batch_size=config.train.batch_size,
shuffle=(train_sampler is None),
num_workers=config.train.num_data_workers,
drop_last=True,
)
if config.experiment.validate:
# cap num workers for validation dataset at 1
num_workers = min(config.train.num_data_workers, 1)
valid_sampler = validset.get_dataset_sampler()
valid_loader = DataLoader(
dataset=validset,
sampler=valid_sampler,
batch_size=config.train.batch_size,
shuffle=(valid_sampler is None),
num_workers=num_workers,
drop_last=True,
)
else:
valid_loader = None
# main training loop
best_valid_loss = None
best_return = {k: -np.inf for k in envs} if config.experiment.rollout.enabled else None
best_success_rate = {k: -1.0 for k in envs} if config.experiment.rollout.enabled else None
last_ckpt_time = time.time()
# number of learning steps per epoch (defaults to a full dataset pass)
train_num_steps = config.experiment.epoch_every_n_steps
valid_num_steps = config.experiment.validation_epoch_every_n_steps
for epoch in range(1, config.train.num_epochs + 1): # epoch numbers start at 1
step_log = TrainUtils.run_epoch(model=model, data_loader=train_loader, epoch=epoch, num_steps=train_num_steps)
model.on_epoch_end(epoch)
# setup checkpoint path
epoch_ckpt_name = f"model_epoch_{epoch}"
# check for recurring checkpoint saving conditions
should_save_ckpt = False
if config.experiment.save.enabled:
time_check = (config.experiment.save.every_n_seconds is not None) and (
time.time() - last_ckpt_time > config.experiment.save.every_n_seconds
)
epoch_check = (
(config.experiment.save.every_n_epochs is not None)
and (epoch > 0)
and (epoch % config.experiment.save.every_n_epochs == 0)
)
epoch_list_check = epoch in config.experiment.save.epochs
should_save_ckpt = time_check or epoch_check or epoch_list_check
ckpt_reason = None
if should_save_ckpt:
last_ckpt_time = time.time()
ckpt_reason = "time"
print(f"Train Epoch {epoch}")
print(json.dumps(step_log, sort_keys=True, indent=4))
for k, v in step_log.items():
if k.startswith("Time_"):
data_logger.record(f"Timing_Stats/Train_{k[5:]}", v, epoch)
else:
data_logger.record(f"Train/{k}", v, epoch)
# Evaluate the model on validation set
if config.experiment.validate:
with torch.no_grad():
step_log = TrainUtils.run_epoch(
model=model, data_loader=valid_loader, epoch=epoch, validate=True, num_steps=valid_num_steps
)
for k, v in step_log.items():
if k.startswith("Time_"):
data_logger.record(f"Timing_Stats/Valid_{k[5:]}", v, epoch)
else:
data_logger.record(f"Valid/{k}", v, epoch)
print(f"Validation Epoch {epoch}")
print(json.dumps(step_log, sort_keys=True, indent=4))
# save checkpoint if achieve new best validation loss
valid_check = "Loss" in step_log
if valid_check and (best_valid_loss is None or (step_log["Loss"] <= best_valid_loss)):
best_valid_loss = step_log["Loss"]
if config.experiment.save.enabled and config.experiment.save.on_best_validation:
epoch_ckpt_name += f"_best_validation_{best_valid_loss}"
should_save_ckpt = True
ckpt_reason = "valid" if ckpt_reason is None else ckpt_reason
# Evaluate the model by running rollouts
# do rollouts at fixed rate or if it's time to save a new ckpt
video_paths = None
rollout_check = (epoch % config.experiment.rollout.rate == 0) or (should_save_ckpt and ckpt_reason == "time")
if config.experiment.rollout.enabled and (epoch > config.experiment.rollout.warmstart) and rollout_check:
# wrap model as a RolloutPolicy to prepare for rollouts
rollout_model = RolloutPolicy(model, obs_normalization_stats=obs_normalization_stats)
num_episodes = config.experiment.rollout.n
all_rollout_logs, video_paths = TrainUtils.rollout_with_stats(
policy=rollout_model,
envs=envs,
horizon=config.experiment.rollout.horizon,
use_goals=config.use_goals,
num_episodes=num_episodes,
render=False,
video_dir=video_dir if config.experiment.render_video else None,
epoch=epoch,
video_skip=config.experiment.get("video_skip", 5),
terminate_on_success=config.experiment.rollout.terminate_on_success,
)
# summarize results from rollouts to tensorboard and terminal
for env_name in all_rollout_logs:
rollout_logs = all_rollout_logs[env_name]
for k, v in rollout_logs.items():
if k.startswith("Time_"):
data_logger.record(f"Timing_Stats/Rollout_{env_name}_{k[5:]}", v, epoch)
else:
data_logger.record(f"Rollout/{k}/{env_name}", v, epoch, log_stats=True)
print("\nEpoch {} Rollouts took {}s (avg) with results:".format(epoch, rollout_logs["time"]))
print(f"Env: {env_name}")
print(json.dumps(rollout_logs, sort_keys=True, indent=4))
# checkpoint and video saving logic
updated_stats = TrainUtils.should_save_from_rollout_logs(
all_rollout_logs=all_rollout_logs,
best_return=best_return,
best_success_rate=best_success_rate,
epoch_ckpt_name=epoch_ckpt_name,
save_on_best_rollout_return=config.experiment.save.on_best_rollout_return,
save_on_best_rollout_success_rate=config.experiment.save.on_best_rollout_success_rate,
)
best_return = updated_stats["best_return"]
best_success_rate = updated_stats["best_success_rate"]
epoch_ckpt_name = updated_stats["epoch_ckpt_name"]
should_save_ckpt = (
config.experiment.save.enabled and updated_stats["should_save_ckpt"]
) or should_save_ckpt
if updated_stats["ckpt_reason"] is not None:
ckpt_reason = updated_stats["ckpt_reason"]
# Only keep saved videos if the ckpt should be saved (but not because of validation score)
should_save_video = (should_save_ckpt and (ckpt_reason != "valid")) or config.experiment.keep_all_videos
if video_paths is not None and not should_save_video:
for env_name in video_paths:
os.remove(video_paths[env_name])
# Save model checkpoints based on conditions (success rate, validation loss, etc)
if should_save_ckpt:
TrainUtils.save_model(
model=model,
config=config,
env_meta=env_meta,
shape_meta=shape_meta,
ckpt_path=os.path.join(ckpt_dir, epoch_ckpt_name + ".pth"),
obs_normalization_stats=obs_normalization_stats,
)
# Finally, log memory usage in MB
process = psutil.Process(os.getpid())
mem_usage = int(process.memory_info().rss / 1000000)
data_logger.record("System/RAM Usage (MB)", mem_usage, epoch)
print(f"\nEpoch {epoch} Memory Usage: {mem_usage} MB\n")
# terminate logging
data_logger.close()
def main(args):
"""Train a model on a task using a specified algorithm."""
# load config
if args.task is not None:
# obtain the configuration entry point
cfg_entry_point_key = f"robomimic_{args.algo}_cfg_entry_point"
print(f"Loading configuration for task: {args.task}")
cfg_entry_point_file = gym.spec(args.task).kwargs.pop(cfg_entry_point_key)
# check if entry point exists
if cfg_entry_point_file is None:
raise ValueError(
f"Could not find configuration for the environment: '{args.task}'."
f" Please check that the gym registry has the entry point: '{cfg_entry_point_key}'."
)
# load config from json file
with open(cfg_entry_point_file) as f:
ext_cfg = json.load(f)
config = config_factory(ext_cfg["algo_name"])
# update config with external json - this will throw errors if
# the external config has keys not present in the base algo config
with config.values_unlocked():
config.update(ext_cfg)
else:
raise ValueError("Please provide a task name through CLI arguments.")
if args.dataset is not None:
config.train.data = args.dataset
if args.name is not None:
config.experiment.name = args.name
# change location of experiment directory
config.train.output_dir = os.path.abspath(os.path.join("./logs/robomimic", args.task))
# get torch device
device = TorchUtils.get_torch_device(try_to_use_cuda=config.train.cuda)
config.lock()
# catch error during training and print it
res_str = "finished run successfully!"
try:
train(config, device=device)
except Exception as e:
res_str = f"run failed with error:\n{e}\n\n{traceback.format_exc()}"
print(res_str)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
# Experiment Name (for tensorboard, saving models, etc.)
parser.add_argument(
"--name",
type=str,
default=None,
help="(optional) if provided, override the experiment name defined in the config",
)
# Dataset path, to override the one in the config
parser.add_argument(
"--dataset",
type=str,
default=None,
help="(optional) if provided, override the dataset path defined in the config",
)
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument("--algo", type=str, default=None, help="Name of the algorithm.")
args = parser.parse_args()
# run training
main(args)
# close sim app
simulation_app.close()
| 16,901 | Python | 38.957447 | 118 | 0.633809 |
NVIDIA-Omniverse/orbit/source/standalone/workflows/robomimic/tools/episode_merging.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Tool to merge multiple episodes with single trajectory into one episode with multiple trajectories."""
from __future__ import annotations
import argparse
import h5py
import json
import os
if __name__ == "__main__":
# parse arguments
parser = argparse.ArgumentParser(description="Merge multiple episodes with single trajectory into one episode.")
parser.add_argument(
"--dir", type=str, default=None, help="Path to directory that contains all single episode hdf5 files"
)
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument("--out", type=str, default="merged_dataset.hdf5", help="output hdf5 file")
args_cli = parser.parse_args()
# read arguments
parent_dir = args_cli.dir
merged_dataset_name = args_cli.out
task_name = args_cli.task
# check valid task name
if task_name is None:
raise ValueError("Please specify a valid task name.")
# get hdf5 entries from specified directory
entries = [i for i in os.listdir(parent_dir) if i.endswith(".hdf5")]
# create new hdf5 file for merging episodes
fp = h5py.File(os.path.join(parent_dir, merged_dataset_name), "a")
# initiate data group
f_grp = fp.create_group("data")
f_grp.attrs["num_samples"] = 0
# merge all episodes
for count, entry in enumerate(entries):
fc = h5py.File(os.path.join(parent_dir, entry), "r")
# find total number of samples in all demos
f_grp.attrs["num_samples"] = f_grp.attrs["num_samples"] + fc["data"]["demo_0"].attrs["num_samples"]
fc.copy("data/demo_0", fp["data"], "demo_" + str(count))
# This is needed to run env in robomimic
fp["data"].attrs["env_args"] = json.dumps({"env_name": task_name, "type": 2, "env_kwargs": {}})
fp.close()
print("merged")
| 1,934 | Python | 32.362068 | 116 | 0.661324 |
NVIDIA-Omniverse/orbit/source/standalone/workflows/robomimic/tools/inspect_demonstrations.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Tool to check structure of hdf5 files."""
from __future__ import annotations
import argparse
import h5py
def check_group(f, num: int):
"""Print the data from different keys in stored dictionary."""
# print name of the group first
for subs in f:
if isinstance(subs, str):
print("\t" * num, subs, ":", type(f[subs]))
check_group(f[subs], num + 1)
# print attributes of the group
print("\t" * num, "attributes", ":")
for attr in f.attrs:
print("\t" * (num + 1), attr, ":", type(f.attrs[attr]), ":", f.attrs[attr])
if __name__ == "__main__":
# parse arguments
parser = argparse.ArgumentParser(description="Check structure of hdf5 file.")
parser.add_argument("file", type=str, default=None, help="The path to HDF5 file to analyze.")
args_cli = parser.parse_args()
# open specified file
with h5py.File(args_cli.file, "r") as f:
# print name of the file first
print(f)
# print contents of file
check_group(f["data"], 1)
| 1,166 | Python | 28.923076 | 97 | 0.614923 |
NVIDIA-Omniverse/orbit/source/standalone/workflows/robomimic/tools/split_train_val.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
# MIT License
#
# Copyright (c) 2021 Stanford Vision and Learning Lab
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
"""
Script for splitting a dataset hdf5 file into training and validation trajectories.
Args:
dataset: path to hdf5 dataset
filter_key: if provided, split the subset of trajectories
in the file that correspond to this filter key into a training
and validation set of trajectories, instead of splitting the
full set of trajectories
ratio: validation ratio, in (0, 1). Defaults to 0.1, which is 10%.
Example usage:
python split_train_val.py --dataset /path/to/demo.hdf5 --ratio 0.1
"""
from __future__ import annotations
import argparse
import h5py
import numpy as np
from robomimic.utils.file_utils import create_hdf5_filter_key
def split_train_val_from_hdf5(hdf5_path: str, val_ratio=0.1, filter_key=None):
"""
Splits data into a training set and a validation set from an HDF5 file.
Args:
hdf5_path: path to the hdf5 file to load the transitions from
val_ratio: ratio of validation demonstrations to all demonstrations
filter_key: if provided, split the subset of demonstration keys stored
under mask/@filter_key instead of the full set of demonstrations
"""
# retrieve number of demos
f = h5py.File(hdf5_path, "r")
if filter_key is not None:
print(f"Using filter key: {filter_key}")
demos = sorted(elem.decode("utf-8") for elem in np.array(f[f"mask/{filter_key}"]))
else:
demos = sorted(list(f["data"].keys()))
num_demos = len(demos)
f.close()
# get random split
num_val = int(val_ratio * num_demos)
mask = np.zeros(num_demos)
mask[:num_val] = 1.0
np.random.shuffle(mask)
mask = mask.astype(int)
train_inds = (1 - mask).nonzero()[0]
valid_inds = mask.nonzero()[0]
train_keys = [demos[i] for i in train_inds]
valid_keys = [demos[i] for i in valid_inds]
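# e.g. with num_demos = 5 and val_ratio = 0.2: num_val = 1 and a shuffled mask such as
# [0, 1, 0, 0, 0] yields train_inds = [0, 2, 3, 4] and valid_inds = [1]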
print(f"{num_val} validation demonstrations out of {num_demos} total demonstrations.")
# pass mask to generate split
name_1 = "train"
name_2 = "valid"
if filter_key is not None:
name_1 = f"{filter_key}_{name_1}"
name_2 = f"{filter_key}_{name_2}"
train_lengths = create_hdf5_filter_key(hdf5_path=hdf5_path, demo_keys=train_keys, key_name=name_1)
valid_lengths = create_hdf5_filter_key(hdf5_path=hdf5_path, demo_keys=valid_keys, key_name=name_2)
print(f"Total number of train samples: {np.sum(train_lengths)}")
print(f"Average number of train samples {np.mean(train_lengths)}")
print(f"Total number of valid samples: {np.sum(valid_lengths)}")
print(f"Average number of valid samples {np.mean(valid_lengths)}")
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("dataset", type=str, help="path to hdf5 dataset")
parser.add_argument(
"--filter_key",
type=str,
default=None,
help=(
"If provided, split the subset of trajectories in the file that correspond to this filter key"
" into a training and validation set of trajectories, instead of splitting the full set of"
" trajectories."
),
)
parser.add_argument("--ratio", type=float, default=0.1, help="validation ratio, in (0, 1)")
args = parser.parse_args()
# seed to make sure results are consistent
np.random.seed(0)
split_train_val_from_hdf5(args.dataset, val_ratio=args.ratio, filter_key=args.filter_key)
| 4,685 | Python | 36.190476 | 106 | 0.690288 |
NVIDIA-Omniverse/orbit/source/standalone/workflows/sb3/play.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Script to play a checkpoint if an RL agent from Stable-Baselines3."""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Play a checkpoint of an RL agent from Stable-Baselines3.")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU pipeline.")
parser.add_argument(
"--disable_fabric", action="store_true", default=False, help="Disable fabric and use USD I/O operations."
)
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument("--checkpoint", type=str, default=None, help="Path to model checkpoint.")
parser.add_argument(
"--use_last_checkpoint",
action="store_true",
help="When no checkpoint provided, use the last saved model. Otherwise use the best saved model.",
)
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import numpy as np
import os
import torch
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import VecNormalize
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils.parse_cfg import get_checkpoint_path, load_cfg_from_registry, parse_env_cfg
from omni.isaac.orbit_tasks.utils.wrappers.sb3 import Sb3VecEnvWrapper, process_sb3_cfg
def main():
"""Play with stable-baselines agent."""
# parse configuration
env_cfg = parse_env_cfg(
args_cli.task, use_gpu=not args_cli.cpu, num_envs=args_cli.num_envs, use_fabric=not args_cli.disable_fabric
)
agent_cfg = load_cfg_from_registry(args_cli.task, "sb3_cfg_entry_point")
# post-process agent configuration
agent_cfg = process_sb3_cfg(agent_cfg)
# create isaac environment
env = gym.make(args_cli.task, cfg=env_cfg)
# wrap around environment for stable baselines
env = Sb3VecEnvWrapper(env)
# normalize environment (if needed)
if "normalize_input" in agent_cfg:
env = VecNormalize(
env,
training=True,
norm_obs="normalize_input" in agent_cfg and agent_cfg.pop("normalize_input"),
norm_reward="normalize_value" in agent_cfg and agent_cfg.pop("normalize_value"),
clip_obs="clip_obs" in agent_cfg and agent_cfg.pop("clip_obs"),
gamma=agent_cfg["gamma"],
clip_reward=np.inf,
)
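# note: the `key in cfg and cfg.pop(key)` pattern reads each normalization setting and
# removes it from the config in one step, falling back to False when the key is absent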
# directory for logging into
log_root_path = os.path.join("logs", "sb3", args_cli.task)
log_root_path = os.path.abspath(log_root_path)
# check checkpoint is valid
if args_cli.checkpoint is None:
if args_cli.use_last_checkpoint:
checkpoint = "model_.*.zip"
else:
checkpoint = "model.zip"
checkpoint_path = get_checkpoint_path(log_root_path, ".*", checkpoint)
else:
checkpoint_path = args_cli.checkpoint
# create agent from stable baselines
print(f"Loading checkpoint from: {checkpoint_path}")
agent = PPO.load(checkpoint_path, env, print_system_info=True)
# reset environment
obs = env.reset()
# simulate environment
while simulation_app.is_running():
# run everything in inference mode
with torch.inference_mode():
# agent stepping
actions, _ = agent.predict(obs, deterministic=True)
# env stepping
obs, _, _, _ = env.step(actions)
# close the simulator
env.close()
if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
| 4,014 | Python | 33.025423 | 115 | 0.680867 |
NVIDIA-Omniverse/orbit/source/standalone/workflows/sb3/train.py | # Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Script to train RL agent with Stable Baselines3.
Since Stable-Baselines3 does not support buffers living on GPU directly,
we recommend using a smaller number of environments. Otherwise,
there will be significant overhead from GPU->CPU transfers.
"""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Train an RL agent with Stable-Baselines3.")
parser.add_argument("--video", action="store_true", default=False, help="Record videos during training.")
parser.add_argument("--video_length", type=int, default=200, help="Length of the recorded video (in steps).")
parser.add_argument("--video_interval", type=int, default=2000, help="Interval between video recordings (in steps).")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU pipeline.")
parser.add_argument(
"--disable_fabric", action="store_true", default=False, help="Disable fabric and use USD I/O operations."
)
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import numpy as np
import os
from datetime import datetime
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import CheckpointCallback
from stable_baselines3.common.logger import configure
from stable_baselines3.common.vec_env import VecNormalize
from omni.isaac.orbit.utils.dict import print_dict
from omni.isaac.orbit.utils.io import dump_pickle, dump_yaml
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils import load_cfg_from_registry, parse_env_cfg
from omni.isaac.orbit_tasks.utils.wrappers.sb3 import Sb3VecEnvWrapper, process_sb3_cfg


def main():
"""Train with stable-baselines agent."""
# parse configuration
env_cfg = parse_env_cfg(
args_cli.task, use_gpu=not args_cli.cpu, num_envs=args_cli.num_envs, use_fabric=not args_cli.disable_fabric
)
agent_cfg = load_cfg_from_registry(args_cli.task, "sb3_cfg_entry_point")
# override configuration with command line arguments
if args_cli.seed is not None:
agent_cfg["seed"] = args_cli.seed
# directory for logging into
log_dir = os.path.join("logs", "sb3", args_cli.task, datetime.now().strftime("%Y-%m-%d_%H-%M-%S"))
# dump the configuration into log-directory
dump_yaml(os.path.join(log_dir, "params", "env.yaml"), env_cfg)
dump_yaml(os.path.join(log_dir, "params", "agent.yaml"), agent_cfg)
dump_pickle(os.path.join(log_dir, "params", "env.pkl"), env_cfg)
dump_pickle(os.path.join(log_dir, "params", "agent.pkl"), agent_cfg)
# post-process agent configuration
agent_cfg = process_sb3_cfg(agent_cfg)
# read configurations about the agent-training
policy_arch = agent_cfg.pop("policy")
n_timesteps = agent_cfg.pop("n_timesteps")
# create isaac environment
env = gym.make(args_cli.task, cfg=env_cfg, render_mode="rgb_array" if args_cli.video else None)
# wrap for video recording
if args_cli.video:
video_kwargs = {
"video_folder": os.path.join(log_dir, "videos"),
"step_trigger": lambda step: step % args_cli.video_interval == 0,
"video_length": args_cli.video_length,
"disable_logger": True,
}
print("[INFO] Recording videos during training.")
print_dict(video_kwargs, nesting=4)
env = gym.wrappers.RecordVideo(env, **video_kwargs)
# wrap around environment for stable baselines
env = Sb3VecEnvWrapper(env)
# set the seed
env.seed(seed=agent_cfg["seed"])
if "normalize_input" in agent_cfg:
env = VecNormalize(
env,
training=True,
norm_obs="normalize_input" in agent_cfg and agent_cfg.pop("normalize_input"),
norm_reward="normalize_value" in agent_cfg and agent_cfg.pop("normalize_value"),
clip_obs="clip_obs" in agent_cfg and agent_cfg.pop("clip_obs"),
gamma=agent_cfg["gamma"],
clip_reward=np.inf,
)
# create agent from stable baselines
agent = PPO(policy_arch, env, verbose=1, **agent_cfg)
# configure the logger
new_logger = configure(log_dir, ["stdout", "tensorboard"])
agent.set_logger(new_logger)
# callbacks for agent
checkpoint_callback = CheckpointCallback(save_freq=1000, save_path=log_dir, name_prefix="model", verbose=2)
# train the agent
agent.learn(total_timesteps=n_timesteps, callback=checkpoint_callback)
# save the final model
agent.save(os.path.join(log_dir, "model"))
# close the simulator
env.close()


if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
NVIDIA-Omniverse/orbit/source/standalone/workflows/rl_games/play.py

# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Script to play a checkpoint if an RL agent from RL-Games."""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Play a checkpoint of an RL agent from RL-Games.")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU pipeline.")
parser.add_argument(
"--disable_fabric", action="store_true", default=False, help="Disable fabric and use USD I/O operations."
)
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument("--checkpoint", type=str, default=None, help="Path to model checkpoint.")
parser.add_argument(
"--use_last_checkpoint",
action="store_true",
help="When no checkpoint provided, use the last saved model. Otherwise use the best saved model.",
)
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import math
import os
import torch
from rl_games.common import env_configurations, vecenv
from rl_games.common.player import BasePlayer
from rl_games.torch_runner import Runner
from omni.isaac.orbit.utils.assets import retrieve_file_path
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils import get_checkpoint_path, load_cfg_from_registry, parse_env_cfg
from omni.isaac.orbit_tasks.utils.wrappers.rl_games import RlGamesGpuEnv, RlGamesVecEnvWrapper


def main():
"""Play with RL-Games agent."""
# parse env configuration
env_cfg = parse_env_cfg(
args_cli.task, use_gpu=not args_cli.cpu, num_envs=args_cli.num_envs, use_fabric=not args_cli.disable_fabric
)
agent_cfg = load_cfg_from_registry(args_cli.task, "rl_games_cfg_entry_point")
# wrap around environment for rl-games
rl_device = agent_cfg["params"]["config"]["device"]
clip_obs = agent_cfg["params"]["env"].get("clip_observations", math.inf)
clip_actions = agent_cfg["params"]["env"].get("clip_actions", math.inf)
# create isaac environment
env = gym.make(args_cli.task, cfg=env_cfg)
# wrap around environment for rl-games
env = RlGamesVecEnvWrapper(env, rl_device, clip_obs, clip_actions)
# register the environment to rl-games registry
# note: in agents configuration: environment name must be "rlgpu"
vecenv.register(
"IsaacRlgWrapper", lambda config_name, num_actors, **kwargs: RlGamesGpuEnv(config_name, num_actors, **kwargs)
)
env_configurations.register("rlgpu", {"vecenv_type": "IsaacRlgWrapper", "env_creator": lambda **kwargs: env})
# specify directory for logging experiments
log_root_path = os.path.join("logs", "rl_games", agent_cfg["params"]["config"]["name"])
log_root_path = os.path.abspath(log_root_path)
print(f"[INFO] Loading experiment from directory: {log_root_path}")
# find checkpoint
if args_cli.checkpoint is None:
# specify directory for logging runs
run_dir = agent_cfg["params"]["config"].get("full_experiment_name", ".*")
# specify name of checkpoint
if args_cli.use_last_checkpoint:
checkpoint_file = ".*"
else:
# this loads the best checkpoint
checkpoint_file = f"{agent_cfg['params']['config']['name']}.pth"
# get path to previous checkpoint
resume_path = get_checkpoint_path(log_root_path, run_dir, checkpoint_file, other_dirs=["nn"])
else:
resume_path = retrieve_file_path(args_cli.checkpoint)
# load previously trained model
agent_cfg["params"]["load_checkpoint"] = True
agent_cfg["params"]["load_path"] = resume_path
print(f"[INFO]: Loading model checkpoint from: {agent_cfg['params']['load_path']}")
# set number of actors into agent config
agent_cfg["params"]["config"]["num_actors"] = env.unwrapped.num_envs
# create runner from rl-games
runner = Runner()
runner.load(agent_cfg)
# obtain the agent from the runner
agent: BasePlayer = runner.create_player()
agent.restore(resume_path)
agent.reset()
# reset environment
obs = env.reset()
# required: enables the flag for batched observations
_ = agent.get_batch_size(obs, 1)
# simulate environment
    # note: We simplified the logic of the rl-games player.py (:func:`BasePlayer.run()`) function in an
    #   attempt to have complete control over environment stepping. However, this removes other
    #   operations, such as masking, that are used for multi-agent learning in RL-Games.
while simulation_app.is_running():
# run everything in inference mode
with torch.inference_mode():
# convert obs to agent format
obs = agent.obs_to_torch(obs)
# agent stepping
actions = agent.get_action(obs, is_deterministic=True)
# env stepping
obs, _, dones, _ = env.step(actions)
# perform operations for terminated episodes
if len(dones) > 0:
# reset rnn state for terminated episodes
if agent.is_rnn and agent.states is not None:
for s in agent.states:
s[:, dones, :] = 0.0
# close the simulator
env.close()


if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
NVIDIA-Omniverse/orbit/source/standalone/workflows/rl_games/train.py

# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
"""Script to train RL agent with RL-Games."""
from __future__ import annotations
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.orbit.app import AppLauncher
# add argparse arguments
parser = argparse.ArgumentParser(description="Train an RL agent with RL-Games.")
parser.add_argument("--video", action="store_true", default=False, help="Record videos during training.")
parser.add_argument("--video_length", type=int, default=200, help="Length of the recorded video (in steps).")
parser.add_argument("--video_interval", type=int, default=2000, help="Interval between video recordings (in steps).")
parser.add_argument("--cpu", action="store_true", default=False, help="Use CPU pipeline.")
parser.add_argument(
"--disable_fabric", action="store_true", default=False, help="Disable fabric and use USD I/O operations."
)
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli = parser.parse_args()
# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import math
import os
from datetime import datetime
from rl_games.common import env_configurations, vecenv
from rl_games.common.algo_observer import IsaacAlgoObserver
from rl_games.torch_runner import Runner
from omni.isaac.orbit.utils.dict import print_dict
from omni.isaac.orbit.utils.io import dump_pickle, dump_yaml
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils import load_cfg_from_registry, parse_env_cfg
from omni.isaac.orbit_tasks.utils.wrappers.rl_games import RlGamesGpuEnv, RlGamesVecEnvWrapper


def main():
"""Train with RL-Games agent."""
# parse seed from command line
args_cli_seed = args_cli.seed
# parse configuration
env_cfg = parse_env_cfg(
args_cli.task, use_gpu=not args_cli.cpu, num_envs=args_cli.num_envs, use_fabric=not args_cli.disable_fabric
)
agent_cfg = load_cfg_from_registry(args_cli.task, "rl_games_cfg_entry_point")
# override from command line
if args_cli_seed is not None:
agent_cfg["params"]["seed"] = args_cli_seed
# specify directory for logging experiments
log_root_path = os.path.join("logs", "rl_games", agent_cfg["params"]["config"]["name"])
log_root_path = os.path.abspath(log_root_path)
print(f"[INFO] Logging experiment in directory: {log_root_path}")
# specify directory for logging runs
log_dir = agent_cfg["params"]["config"].get("full_experiment_name", datetime.now().strftime("%Y-%m-%d_%H-%M-%S"))
# set directory into agent config
# logging directory path: <train_dir>/<full_experiment_name>
agent_cfg["params"]["config"]["train_dir"] = log_root_path
agent_cfg["params"]["config"]["full_experiment_name"] = log_dir
# dump the configuration into log-directory
dump_yaml(os.path.join(log_root_path, log_dir, "params", "env.yaml"), env_cfg)
dump_yaml(os.path.join(log_root_path, log_dir, "params", "agent.yaml"), agent_cfg)
dump_pickle(os.path.join(log_root_path, log_dir, "params", "env.pkl"), env_cfg)
dump_pickle(os.path.join(log_root_path, log_dir, "params", "agent.pkl"), agent_cfg)
# read configurations about the agent-training
rl_device = agent_cfg["params"]["config"]["device"]
clip_obs = agent_cfg["params"]["env"].get("clip_observations", math.inf)
clip_actions = agent_cfg["params"]["env"].get("clip_actions", math.inf)
# create isaac environment
env = gym.make(args_cli.task, cfg=env_cfg, render_mode="rgb_array" if args_cli.video else None)
# wrap for video recording
if args_cli.video:
video_kwargs = {
"video_folder": os.path.join(log_dir, "videos"),
"step_trigger": lambda step: step % args_cli.video_interval == 0,
"video_length": args_cli.video_length,
"disable_logger": True,
}
print("[INFO] Recording videos during training.")
print_dict(video_kwargs, nesting=4)
env = gym.wrappers.RecordVideo(env, **video_kwargs)
# wrap around environment for rl-games
env = RlGamesVecEnvWrapper(env, rl_device, clip_obs, clip_actions)
# register the environment to rl-games registry
# note: in agents configuration: environment name must be "rlgpu"
vecenv.register(
"IsaacRlgWrapper", lambda config_name, num_actors, **kwargs: RlGamesGpuEnv(config_name, num_actors, **kwargs)
)
env_configurations.register("rlgpu", {"vecenv_type": "IsaacRlgWrapper", "env_creator": lambda **kwargs: env})
# set number of actors into agent config
agent_cfg["params"]["config"]["num_actors"] = env.unwrapped.num_envs
# create runner from rl-games
runner = Runner(IsaacAlgoObserver())
runner.load(agent_cfg)
# set seed of the env
env.seed(agent_cfg["params"]["seed"])
# reset the agent and env
runner.reset()
# train the agent
runner.run({"train": True, "play": False, "sigma": None})
# close the simulator
env.close()


if __name__ == "__main__":
# run the main function
main()
# close sim app
simulation_app.close()
NVIDIA-Omniverse/orbit/docker/docker-compose.yaml

# Here we set the parts that would
# be re-used between services to an
# extension field
# https://docs.docker.com/compose/compose-file/compose-file-v3/#extension-fields
x-default-orbit-volumes:
&default-orbit-volumes
# These volumes follow from this page
# https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/install_faq.html#save-isaac-sim-configs-on-local-disk
- type: volume
source: isaac-cache-kit
target: ${DOCKER_ISAACSIM_ROOT_PATH}/kit/cache
- type: volume
source: isaac-cache-ov
target: ${DOCKER_USER_HOME}/.cache/ov
- type: volume
source: isaac-cache-pip
target: ${DOCKER_USER_HOME}/.cache/pip
- type: volume
source: isaac-cache-gl
target: ${DOCKER_USER_HOME}/.cache/nvidia/GLCache
- type: volume
source: isaac-cache-compute
target: ${DOCKER_USER_HOME}/.nv/ComputeCache
- type: volume
source: isaac-logs
target: ${DOCKER_USER_HOME}/.nvidia-omniverse/logs
- type: volume
source: isaac-carb-logs
target: ${DOCKER_ISAACSIM_ROOT_PATH}/kit/logs/Kit/Isaac-Sim
- type: volume
source: isaac-data
target: ${DOCKER_USER_HOME}/.local/share/ov/data
- type: volume
source: isaac-docs
target: ${DOCKER_USER_HOME}/Documents
# These volumes allow X11 Forwarding
# We currently comment these out because they can
# cause bugs and warnings for people uninterested in
  # X11 Forwarding from within the container. We keep them
# as comments as a convenience for those seeking X11
# forwarding until a scripted solution is developed
# - type: bind
# source: /tmp/.X11-unix
# target: /tmp/.X11-unix
# - type: bind
# source: ${HOME}/.Xauthority
# target: ${DOCKER_USER_HOME}/.Xauthority
# This overlay allows changes on the local files to
# be reflected within the container immediately
- type: bind
source: ../source
target: /workspace/orbit/source
- type: bind
source: ../docs
target: /workspace/orbit/docs
# The effect of these volumes is twofold:
# 1. Prevent root-owned files from flooding the _build and logs dir
# on the host machine
# 2. Preserve the artifacts in persistent volumes for later copying
# to the host machine
- type: volume
source: orbit-docs
target: /workspace/orbit/docs/_build
- type: volume
source: orbit-logs
target: /workspace/orbit/logs
- type: volume
source: orbit-data
target: /workspace/orbit/data_storage
x-default-orbit-deploy:
&default-orbit-deploy
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [ gpu ]
services:
# This service is the base Orbit image
orbit-base:
profiles: ["base"]
env_file: .env.base
build:
context: ../
dockerfile: docker/Dockerfile.base
args:
- ISAACSIM_VERSION=${ISAACSIM_VERSION}
- ISAACSIM_ROOT_PATH=${DOCKER_ISAACSIM_ROOT_PATH}
- ORBIT_PATH=${DOCKER_ORBIT_PATH}
- DOCKER_USER_HOME=${DOCKER_USER_HOME}
image: orbit-base
container_name: orbit-base
environment:
# We can't just define this in the .env file because shell envars take precedence
# https://docs.docker.com/compose/environment-variables/envvars-precedence/
- ISAACSIM_PATH=${DOCKER_ORBIT_PATH}/_isaac_sim
- ORBIT_PATH=${DOCKER_ORBIT_PATH}
# This should also be enabled for X11 forwarding
# - DISPLAY=${DISPLAY}
volumes: *default-orbit-volumes
network_mode: host
deploy: *default-orbit-deploy
# This is the entrypoint for the container
entrypoint: bash
stdin_open: true
tty: true
# This service adds a ROS2 Humble
# installation on top of the base image
orbit-ros2:
profiles: ["ros2"]
env_file:
- .env.base
- .env.ros2
build:
context: ../
dockerfile: docker/Dockerfile.ros2
args:
# ROS2_APT_PACKAGE will default to NONE. This is to
# avoid a warning message when building only the base profile
# with the .env.base file
- ROS2_APT_PACKAGE=${ROS2_APT_PACKAGE:-NONE}
- DOCKER_USER_HOME=${DOCKER_USER_HOME}
image: orbit-ros2
container_name: orbit-ros2
environment:
- ISAACSIM_PATH=${DOCKER_ORBIT_PATH}/_isaac_sim
- ORBIT_PATH=${DOCKER_ORBIT_PATH}
volumes: *default-orbit-volumes
network_mode: host
deploy: *default-orbit-deploy
# This is the entrypoint for the container
entrypoint: bash
stdin_open: true
tty: true
volumes:
# isaac-sim
isaac-cache-kit:
isaac-cache-ov:
isaac-cache-pip:
isaac-cache-gl:
isaac-cache-compute:
isaac-logs:
isaac-carb-logs:
isaac-data:
isaac-docs:
# orbit
orbit-docs:
orbit-logs:
orbit-data:
NVIDIA-Omniverse/orbit/docs/README.md

# Building Documentation
We use [Sphinx](https://www.sphinx-doc.org/en/master/) with the [Book Theme](https://sphinx-book-theme.readthedocs.io/en/stable/) for maintaining the documentation.
> **Note:** To build the documentation, we recommend creating a virtual environment to avoid any conflicts with system-installed dependencies.
Execute the following instructions to build the documentation (assuming you are at the top of the repository):
1. Install the dependencies for [Sphinx](https://www.sphinx-doc.org/en/master/):
```bash
# enter the location where this readme exists
cd docs
# install dependencies
pip install -r requirements.txt
```
2. Generate the documentation file via:
```bash
# make the html version
make html
```
3. The documentation is now available at `docs/_build/html/index.html`:
```bash
# open on default browser
xdg-open _build/html/index.html
```
NVIDIA-Omniverse/orbit/docs/index.rst

Overview
========
**Orbit** is a unified and modular framework for robot learning that aims to simplify common workflows
in robotics research (such as RL, learning from demonstrations, and motion planning). It is built upon
`NVIDIA Isaac Sim`_ to leverage the latest simulation capabilities for photo-realistic scenes, and fast
and efficient simulation. The core objectives of the framework are:
- **Modularity**: Easily customize and add new environments, robots, and sensors.
- **Agility**: Adapt to the changing needs of the community.
- **Openness**: Remain open-sourced to allow the community to contribute and extend the framework.
- **Battery-included**: Include a number of environments, sensors, and tasks that are ready to use.
For more information about the framework, please refer to the `paper <https://arxiv.org/abs/2301.04195>`_
:cite:`mittal2023orbit`. For clarifications on NVIDIA Isaac ecosystem, please check out the
:doc:`/source/setup/faq` section.
.. figure:: source/_static/tasks.jpg
:width: 100%
:alt: Example tasks created using orbit
Citing
======
If you use Orbit in your research, please use the following BibTeX entry:
.. code:: bibtex
@article{mittal2023orbit,
author={Mittal, Mayank and Yu, Calvin and Yu, Qinxi and Liu, Jingzhou and Rudin, Nikita and Hoeller, David and Yuan, Jia Lin and Singh, Ritvik and Guo, Yunrong and Mazhar, Hammad and Mandlekar, Ajay and Babich, Buck and State, Gavriel and Hutter, Marco and Garg, Animesh},
journal={IEEE Robotics and Automation Letters},
title={Orbit: A Unified Simulation Framework for Interactive Robot Learning Environments},
year={2023},
volume={8},
number={6},
pages={3740-3747},
doi={10.1109/LRA.2023.3270034}
}
License
=======
NVIDIA Isaac Sim is provided under the NVIDIA End User License Agreement. However, the
Orbit framework is open-sourced under the BSD-3-Clause license.
Please refer to :ref:`license` for more details.
Table of Contents
=================
.. toctree::
:maxdepth: 2
:caption: Getting Started
source/setup/installation
source/setup/developer
source/setup/sample
source/setup/template
source/setup/faq
.. toctree::
:maxdepth: 2
:caption: Features
source/features/environments
source/features/actuators
.. source/features/motion_generators
.. toctree::
:maxdepth: 1
:caption: Resources
:titlesonly:
source/tutorials/index
source/how-to/index
source/deployment/index
.. toctree::
:maxdepth: 1
:caption: Source API
source/api/index
.. toctree::
:maxdepth: 1
:caption: References
source/refs/migration
source/refs/contributing
source/refs/troubleshooting
source/refs/issues
source/refs/changelog
source/refs/license
source/refs/bibliography
.. toctree::
:hidden:
:caption: Project Links
GitHub <https://github.com/NVIDIA-Omniverse/orbit>
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
.. _NVIDIA Isaac Sim: https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/overview.html
NVIDIA-Omniverse/orbit/docs/conf.py

# Copyright (c) 2022-2024, The ORBIT Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
sys.path.insert(0, os.path.abspath("../source/extensions/omni.isaac.orbit"))
sys.path.insert(0, os.path.abspath("../source/extensions/omni.isaac.orbit/omni/isaac/orbit"))
sys.path.insert(0, os.path.abspath("../source/extensions/omni.isaac.orbit_tasks"))
sys.path.insert(0, os.path.abspath("../source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks"))
# -- Project information -----------------------------------------------------
project = "orbit"
copyright = "2022-2024, The ORBIT Project Developers."
author = "The ORBIT Project Developers."
version = "0.2.0"
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
"autodocsumm",
"myst_parser",
"sphinx.ext.napoleon",
"sphinxemoji.sphinxemoji",
"sphinx.ext.autodoc",
"sphinx.ext.autosummary",
"sphinx.ext.githubpages",
"sphinx.ext.intersphinx",
"sphinx.ext.mathjax",
"sphinx.ext.todo",
"sphinx.ext.viewcode",
"sphinxcontrib.bibtex",
"sphinx_copybutton",
"sphinx_design",
]
# mathjax hacks
mathjax3_config = {
"tex": {
"inlineMath": [["\\(", "\\)"]],
"displayMath": [["\\[", "\\]"]],
},
}
# panels hacks
panels_add_bootstrap_css = False
panels_add_fontawesome_css = True
# supported file extensions for source files
source_suffix = {
".rst": "restructuredtext",
".md": "markdown",
}
# make sure we don't have any unknown references
# TODO: Enable this by default once we have fixed all the warnings
# nitpicky = True
# put type hints inside the signature instead of the description (easier to maintain)
autodoc_typehints = "signature"
# autodoc_typehints_format = "fully-qualified"
# document class *and* __init__ methods
autoclass_content = "class" #
# separate class docstring from __init__ docstring
autodoc_class_signature = "separated"
# sort members by source order
autodoc_member_order = "bysource"
# inherit docstrings from base classes
autodoc_inherit_docstrings = True
# BibTeX configuration
bibtex_bibfiles = ["source/_static/refs.bib"]
# generate autosummary even if no references
autosummary_generate = True
autosummary_generate_overwrite = False
# default autodoc settings
autodoc_default_options = {
"autosummary": True,
}
# generate links to the documentation of objects in external projects
intersphinx_mapping = {
"python": ("https://docs.python.org/3", None),
"numpy": ("https://numpy.org/doc/stable/", None),
"torch": ("https://pytorch.org/docs/stable/", None),
"isaac": ("https://docs.omniverse.nvidia.com/py/isaacsim", None),
"gymnasium": ("https://gymnasium.farama.org/", None),
"warp": ("https://nvidia.github.io/warp/", None),
}
# Add any paths that contain templates here, relative to this directory.
templates_path = []
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store", "README.md", "licenses/*"]
# Mock out modules that are not available on RTD
autodoc_mock_imports = [
"torch",
"numpy",
"matplotlib",
"scipy",
"carb",
"warp",
"pxr",
"omni.kit",
"omni.usd",
"omni.client",
"omni.physx",
"omni.physics",
"pxr.PhysxSchema",
"pxr.PhysicsSchemaTools",
"omni.replicator",
"omni.isaac.core",
"omni.isaac.kit",
"omni.isaac.cloner",
"omni.isaac.urdf",
"omni.isaac.version",
"omni.isaac.motion_generation",
"omni.isaac.ui",
"omni.syntheticdata",
"omni.timeline",
"omni.ui",
"gym",
"skrl",
"stable_baselines3",
"rsl_rl",
"rl_games",
"ray",
"h5py",
"hid",
"prettytable",
"tqdm",
"tensordict",
"trimesh",
"toml",
]
# List of zero or more Sphinx-specific warning categories to be squelched (i.e.,
# suppressed, ignored).
suppress_warnings = [
# FIXME: *THIS IS TERRIBLE.* Generally speaking, we do want Sphinx to inform
# us about cross-referencing failures. Remove this hack entirely after Sphinx
# resolves this open issue:
# https://github.com/sphinx-doc/sphinx/issues/4961
# Squelch mostly ignorable warnings resembling:
# WARNING: more than one target found for cross-reference 'TypeHint':
# beartype.door._doorcls.TypeHint, beartype.door.TypeHint
#
# Sphinx currently emits *MANY* of these warnings against our
# documentation. All of these warnings appear to be ignorable. Although we
# could explicitly squelch *SOME* of these warnings by canonicalizing
# relative to absolute references in docstrings, Sphinx emits still others
# of these warnings when parsing PEP-compliant type hints via static
# analysis. Since those hints are actual hints that *CANNOT* by definition
# by canonicalized, our only recourse is to squelch warnings altogether.
"ref.python",
]
# -- Internationalization ----------------------------------------------------
# specifying the natural language populates some key tags
language = "en"
# -- Options for HTML output -------------------------------------------------
import sphinx_book_theme
html_title = "orbit documentation"
html_theme_path = [sphinx_book_theme.get_html_theme_path()]
html_theme = "sphinx_book_theme"
html_favicon = "source/_static/favicon.ico"
html_show_copyright = True
html_show_sphinx = False
html_last_updated_fmt = "" # to reveal the build date in the pages meta
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["source/_static/css"]
html_css_files = ["custom.css"]
html_theme_options = {
"collapse_navigation": True,
"repository_url": "https://github.com/NVIDIA-Omniverse/Orbit",
"announcement": "We have now released v0.2.0! Please use the latest version for the best experience.",
"use_repository_button": True,
"use_issues_button": True,
"use_edit_page_button": True,
"show_toc_level": 1,
"use_sidenotes": True,
"logo": {
"text": "orbit documentation",
"image_light": "source/_static/NVIDIA-logo-white.png",
"image_dark": "source/_static/NVIDIA-logo-black.png",
},
"icon_links": [
{
"name": "GitHub",
"url": "https://github.com/NVIDIA-Omniverse/Orbit",
"icon": "fa-brands fa-square-github",
"type": "fontawesome",
},
{
"name": "Isaac Sim",
"url": "https://developer.nvidia.com/isaac-sim",
"icon": "https://img.shields.io/badge/IsaacSim-2023.1.1-silver.svg",
"type": "url",
},
{
"name": "Stars",
"url": "https://img.shields.io/github/stars/NVIDIA-Omniverse/Orbit?color=fedcba",
"icon": "https://img.shields.io/github/stars/NVIDIA-Omniverse/Orbit?color=fedcba",
"type": "url",
},
],
"icon_links_label": "Quick Links",
}
html_sidebars = {"**": ["navbar-logo.html", "icon-links.html", "search-field.html", "sbt-sidebar-nav.html"]}
# -- Advanced configuration -------------------------------------------------


def skip_member(app, what, name, obj, skip, options):
# List the names of the functions you want to skip here
exclusions = ["from_dict", "to_dict", "replace", "copy", "__post_init__"]
if name in exclusions:
return True
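    # returning None (rather than False) defers the decision to Sphinx's default behavior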
return None


def setup(app):
app.connect("autodoc-skip-member", skip_member)
NVIDIA-Omniverse/orbit/docs/source/how-to/save_camera_output.rst

.. _how-to-save-images-and-3d-reprojection:
Saving rendered images and 3D re-projection
===========================================
.. currentmodule:: omni.isaac.orbit
This guide is accompanied by the ``run_usd_camera.py`` script in the ``orbit/source/standalone/tutorials/04_sensors``
directory.
.. dropdown:: Code for run_usd_camera.py
:icon: code
.. literalinclude:: ../../../source/standalone/tutorials/04_sensors/run_usd_camera.py
:language: python
:emphasize-lines: 171-179, 229-247, 251-264
:linenos:
Saving using Replicator Basic Writer
------------------------------------
To save camera outputs, we use the basic writer class from Omniverse Replicator. This class allows us to save the
images in a numpy format. For more information on the basic writer, please check the
`documentation <https://docs.omniverse.nvidia.com/extensions/latest/ext_replicator/writer_examples.html>`_.
.. literalinclude:: ../../../source/standalone/tutorials/04_sensors/run_usd_camera.py
:language: python
:start-at: rep_writer = rep.BasicWriter(
:end-before: # Camera positions, targets, orientations
While stepping the simulator, the images can be saved to the defined folder. Since the BasicWriter only supports
saving data in NumPy format, we first need to convert the PyTorch tensors from the sensor outputs to NumPy
arrays before packing them in a dictionary.
.. literalinclude:: ../../../source/standalone/tutorials/04_sensors/run_usd_camera.py
:language: python
:start-at: # Save images from camera at camera_index
:end-at: single_cam_info = camera.data.info[camera_index]
After this step, we can save the images using the BasicWriter.
.. literalinclude:: ../../../source/standalone/tutorials/04_sensors/run_usd_camera.py
:language: python
:start-at: # Pack data back into replicator format to save them using its writer
:end-at: rep_writer.write(rep_output)
Projection into 3D Space
------------------------
We include utilities to project the depth image into 3D space. The re-projection operations are done using
PyTorch operations, which allows for faster computation.
.. code-block:: python
from omni.isaac.orbit.utils.math import transform_points, unproject_depth
# Pointcloud in world frame
points_3d_cam = unproject_depth(
camera.data.output["distance_to_image_plane"], camera.data.intrinsic_matrices
)
points_3d_world = transform_points(points_3d_cam, camera.data.pos_w, camera.data.quat_w_ros)
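
Since rays that miss the scene produce invalid values in the depth image, it is often useful to
filter the re-projected points before further processing. The following is a minimal sketch of
such a filter (``points_3d_world`` is the tensor computed above; the filtering step itself is
our addition, not part of the framework's utilities):

.. code-block:: python

   import torch

   # keep only points whose coordinates are finite, i.e. rays that actually hit geometry
   valid_mask = torch.isfinite(points_3d_world).all(dim=-1)
   points_3d_valid = points_3d_world[valid_mask]
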
Alternately, we can use the :meth:`omni.isaac.orbit.sensors.camera.utils.create_pointcloud_from_depth` function
to create a point cloud from the depth image and transform it to the world frame.
.. literalinclude:: ../../../source/standalone/tutorials/04_sensors/run_usd_camera.py
:language: python
:start-at: # Derive pointcloud from camera at camera_index
:end-before: # In the first few steps, things are still being instanced and Camera.data
The resulting point cloud can be visualized using the :mod:`omni.isaac.debug_draw` extension from Isaac Sim.
This makes it easy to visualize the point cloud in the 3D space.
.. literalinclude:: ../../../source/standalone/tutorials/04_sensors/run_usd_camera.py
:language: python
:start-at: # In the first few steps, things are still being instanced and Camera.data
:end-at: pc_markers.visualize(translations=pointcloud)
Executing the script
--------------------
To run the accompanying script, execute the following command:
.. code-block:: bash
# Usage with saving and drawing
./orbit.sh -p source/standalone/tutorials/04_sensors/run_usd_camera.py --save --draw
# Usage with saving only in headless mode
./orbit.sh -p source/standalone/tutorials/04_sensors/run_usd_camera.py --save --headless --offscreen_render
The simulation should start, and you can observe different objects falling down. An output folder will be created
in the ``orbit/source/standalone/tutorials/04_sensors`` directory, where the images will be saved. Additionally,
you should see the point cloud in the 3D space drawn on the viewport.
To stop the simulation, close the window, press the ``STOP`` button in the UI, or use ``Ctrl+C`` in the terminal.
NVIDIA-Omniverse/orbit/docs/source/how-to/wrap_rl_env.rst

.. _how-to-env-wrappers:
Wrapping environments
=====================
.. currentmodule:: omni.isaac.orbit
Environment wrappers are a way to modify the behavior of an environment without modifying the environment itself.
This can be used to apply functions to modify observations or rewards, record videos, enforce time limits, etc.
A detailed description of the API is available in the :class:`gymnasium.Wrapper` class.
At present, all RL environments inheriting from the :class:`~envs.RLTaskEnv` class
are compatible with :class:`gymnasium.Wrapper`, since the base class implements the :class:`gymnasium.Env` interface.
In order to wrap an environment, you need to first initialize the base environment. After that, you can
wrap it with as many wrappers as you want by calling ``env = wrapper(env, *args, **kwargs)`` repeatedly.
For example, here is how you would wrap an environment to enforce that reset is called before step or render:
.. code-block:: python
"""Launch Isaac Sim Simulator first."""
from omni.isaac.orbit.app import AppLauncher
# launch omniverse app in headless mode
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
import omni.isaac.orbit_tasks # noqa: F401
from omni.isaac.orbit_tasks.utils import load_cfg_from_registry
# create base environment
cfg = load_cfg_from_registry("Isaac-Reach-Franka-v0", "env_cfg_entry_point")
env = gym.make("Isaac-Reach-Franka-v0", cfg=cfg)
# wrap environment to enforce that reset is called before step
env = gym.wrappers.OrderEnforcing(env)
Wrapper for recording videos
----------------------------
The :class:`gymnasium.wrappers.RecordVideo` wrapper can be used to record videos of the environment.
The wrapper takes a ``video_dir`` argument, which specifies where to save the videos. The videos are saved in
`mp4 <https://en.wikipedia.org/wiki/MP4_file_format>`__ format at specified intervals for specified
number of environment steps or episodes.
To use the wrapper, you need to first install ``ffmpeg``. On Ubuntu, you can install it by running:
.. code-block:: bash
sudo apt-get install ffmpeg
.. attention::
By default, when running an environment in headless mode, the Omniverse viewport is disabled. This is done to
improve performance by avoiding unnecessary rendering.
We notice the following performance in different rendering modes with the ``Isaac-Reach-Franka-v0`` environment
using an RTX 3090 GPU:
* No GUI execution without off-screen rendering enabled: ~65,000 FPS
* No GUI execution with off-screen enabled: ~57,000 FPS
* GUI execution with full rendering: ~13,000 FPS
The viewport camera used for rendering is the default camera in the scene called ``"/OmniverseKit_Persp"``.
The camera's pose and image resolution can be configured through the
:class:`~envs.ViewerCfg` class.
.. dropdown:: Default parameters of the ViewerCfg class:
:icon: code
.. literalinclude:: ../../../source/extensions/omni.isaac.orbit/omni/isaac/orbit/envs/base_env_cfg.py
:language: python
:pyobject: ViewerCfg
After adjusting the parameters, you can record videos by wrapping the environment with the
:class:`gymnasium.wrappers.RecordVideo` wrapper and enabling the off-screen rendering
flag. Additionally, you need to specify the render mode of the environment as ``"rgb_array"``.
As an example, the following code records a video of the ``Isaac-Reach-Franka-v0`` environment
for 200 steps, and saves it in the ``videos`` folder at a step interval of 1500 steps.
.. code:: python
"""Launch Isaac Sim Simulator first."""
from omni.isaac.orbit.app import AppLauncher
# launch omniverse app in headless mode with off-screen rendering
app_launcher = AppLauncher(headless=True, offscreen_render=True)
simulation_app = app_launcher.app
"""Rest everything follows."""
import gymnasium as gym
# adjust camera resolution and pose
env_cfg.viewer.resolution = (640, 480)
env_cfg.viewer.eye = (1.0, 1.0, 1.0)
env_cfg.viewer.lookat = (0.0, 0.0, 0.0)
# create isaac-env instance
# set render mode to rgb_array to obtain images on render calls
env = gym.make(task_name, cfg=env_cfg, render_mode="rgb_array")
# wrap for video recording
video_kwargs = {
"video_folder": "videos",
"step_trigger": lambda step: step % 1500 == 0,
"video_length": 200,
}
env = gym.wrappers.RecordVideo(env, **video_kwargs)
Wrapper for learning frameworks
-------------------------------
Every learning framework has its own API for interacting with environments. For example, the
`Stable-Baselines3`_ library uses the `gym.Env <https://gymnasium.farama.org/api/env/>`_
interface to interact with environments. However, libraries like `RL-Games`_ or `RSL-RL`_
use their own API for interfacing with a learning environments. Since there is no one-size-fits-all
solution, we do not base the :class:`~envs.RLTaskEnv` class on any particular learning framework's
environment definition. Instead, we implement wrappers to make it compatible with the learning
framework's environment definition.
As an example of how to use the RL task environment with Stable-Baselines3:
.. code:: python
from omni.isaac.orbit_tasks.utils.wrappers.sb3 import Sb3VecEnvWrapper
# create isaac-env instance
env = gym.make(task_name, cfg=env_cfg)
# wrap around environment for stable baselines
env = Sb3VecEnvWrapper(env)
.. caution::
Wrapping the environment with the respective learning framework's wrapper should happen in the end,
i.e. after all other wrappers have been applied. This is because the learning framework's wrapper
modifies the interpretation of environment's APIs which may no longer be compatible with :class:`gymnasium.Env`.
Adding new wrappers
-------------------
All new wrappers should be added to the :mod:`omni.isaac.orbit_tasks.utils.wrappers` module.
They should check that the underlying environment is an instance of :class:`omni.isaac.orbit.envs.RLTaskEnv`
before applying the wrapper. This can be done by using the :func:`unwrapped` property.
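
As a concrete illustration, a minimal wrapper could perform this check as follows. This is a
sketch of our own (the class name and error message below are illustrative), not a wrapper
shipped with the framework:

.. code-block:: python

   import gymnasium as gym

   from omni.isaac.orbit.envs import RLTaskEnv


   class MyWrapper(gym.Wrapper):
       """A minimal wrapper that validates the type of the underlying environment."""

       def __init__(self, env: gym.Env):
           # ensure we are wrapping an Orbit RL task environment
           if not isinstance(env.unwrapped, RLTaskEnv):
               raise ValueError(f"Expected an RLTaskEnv, but got: {type(env.unwrapped)}")
           super().__init__(env)
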
We include a set of wrappers in this module that can be used as a reference to implement your own wrappers.
If you implement a new wrapper, please consider contributing it to the framework by opening a pull request.
.. _Stable-Baselines3: https://stable-baselines3.readthedocs.io/en/master/
.. _RL-Games: https://github.com/Denys88/rl_games
.. _RSL-RL: https://github.com/leggedrobotics/rsl_rl
NVIDIA-Omniverse/orbit/docs/source/how-to/draw_markers.rst

Creating Visualization Markers
==============================
.. currentmodule:: omni.isaac.orbit
Visualization markers are useful to debug the state of the environment. They can be used to visualize
the frames, commands, and other information in the simulation.
While Isaac Sim provides its own :mod:`omni.isaac.debug_draw` extension, it is limited to rendering only
points, lines and splines. For cases, where you need to render more complex shapes, you can use the
:class:`markers.VisualizationMarkers` class.
This guide is accompanied by a sample script ``markers.py`` in the ``orbit/source/standalone/demos`` directory.
.. dropdown:: Code for markers.py
:icon: code
.. literalinclude:: ../../../source/standalone/demos/markers.py
:language: python
:emphasize-lines: 49-97, 112-113, 142-148
:linenos:
Configuring the markers
-----------------------
The :class:`~markers.VisualizationMarkersCfg` class provides a simple interface to configure
different types of markers. It takes in the following parameters:
- :attr:`~markers.VisualizationMarkersCfg.prim_path`: The corresponding prim path for the marker class.
- :attr:`~markers.VisualizationMarkersCfg.markers`: A dictionary specifying the different marker prototypes
handled by the class. The key is the name of the marker prototype and the value is its spawn configuration.
.. note::
In case the marker prototype specifies a configuration with physics properties, these are removed.
This is because the markers are not meant to be simulated.
Here we show all the different types of markers that can be configured. These range from simple shapes like
cones and spheres to more complex geometries like a frame or arrows. The marker prototypes can also be
configured from USD files.
.. literalinclude:: ../../../source/standalone/demos/markers.py
:language: python
:lines: 49-97
:dedent:
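
For a quick, self-contained flavor of this configuration, the following sketch defines a single
sphere prototype (the prim path, marker name, and color below are illustrative):

.. code-block:: python

   import omni.isaac.orbit.sim as sim_utils
   from omni.isaac.orbit.markers import VisualizationMarkers, VisualizationMarkersCfg

   # a single red sphere prototype under an illustrative prim path
   marker_cfg = VisualizationMarkersCfg(
       prim_path="/Visuals/myMarkers",
       markers={
           "sphere": sim_utils.SphereCfg(
               radius=0.1,
               visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(1.0, 0.0, 0.0)),
           ),
       },
   )
   my_markers = VisualizationMarkers(marker_cfg)
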
Drawing the markers
-------------------
To draw the markers, we call the :class:`~markers.VisualizationMarkers.visualize` method. This method takes in
as arguments the pose of the markers and the corresponding marker prototypes to draw.
.. literalinclude:: ../../../source/standalone/demos/markers.py
:language: python
:lines: 142-148
:dedent:
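
For instance, assuming the sphere configuration sketched earlier, one could draw a sphere at a
few positions as follows (the translation values are illustrative):

.. code-block:: python

   import torch

   # one sphere is drawn per row of the translations tensor
   translations = torch.tensor([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0]])
   my_markers.visualize(translations=translations)
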
Executing the Script
--------------------
To run the accompanying script, execute the following command:
.. code-block:: bash
./orbit.sh -p source/standalone/demos/markers.py
The simulation should start, and you can observe the different types of markers arranged in a grid pattern.
The markers will rotate around their respective axes. Additionally, every few rotations, they will
roll forward on the grid.
To stop the simulation, close the window, or use ``Ctrl+C`` in the terminal.
NVIDIA-Omniverse/orbit/docs/source/how-to/import_new_asset.rst

Importing a New Asset
=====================
.. currentmodule:: omni.isaac.orbit
NVIDIA Omniverse relies on the Universal Scene Description (USD) file format to
import and export assets. USD is an open source file format developed by Pixar
Animation Studios. It is a scene description format optimized for large-scale,
complex data sets. While this format is widely used in the film and animation
industry, it is less common in the robotics community.
To this end, NVIDIA has developed various importers that allow you to import
assets from other file formats into USD. These importers are available as
extensions to Omniverse Kit:
* **URDF Importer** - Import assets from URDF files.
* **MJCF Importer** - Import assets from MJCF files.
* **Asset Importer** - Import assets from various file formats, including
OBJ, FBX, STL, and glTF.
The recommended workflow from NVIDIA is to use the above importers to convert
the asset into its USD representation. Once the asset is in USD format, you can
use the Omniverse Kit to edit the asset and export it to other file formats.
An important note when using assets for large-scale simulation is to ensure that they
are in the `instanceable`_ format. This allows the asset to be efficiently loaded
into memory and used multiple times in a scene. Otherwise, the asset will be
loaded into memory multiple times, which can cause performance issues.
For more details on instanceable assets, please check the Isaac Sim `documentation`_.
Using URDF Importer
-------------------
Isaac Sim includes the URDF and MJCF importers by default. These importers support the
option to import assets as instanceable assets. By selecting this option, the
importer will create two USD files: one for all the mesh data and one for
all the non-mesh data (e.g. joints, rigid bodies, etc.). The prims in the mesh data file are
referenced in the non-mesh data file. This allows the mesh data (which is often bulky) to be
loaded into memory only once and used multiple times in a scene.
For using these importers from the GUI, please check the documentation for `MJCF importer`_ and
`URDF importer`_ respectively.
For using the URDF importers from Python scripts, we include a utility tool called ``convert_urdf.py``.
Internally, this script creates an instance of :class:`~sim.converters.UrdfConverterCfg` which
is then passed to the :class:`~sim.converters.UrdfConverter` class. The configuration class specifies
the default values for the importer. The important settings are listed below; a short Python usage sketch follows the list:
* :attr:`~sim.converters.UrdfConverterCfg.fix_base` - Whether to fix the base of the robot.
This depends on whether you have a floating-base or fixed-base robot.
* :attr:`~sim.converters.UrdfConverterCfg.make_instanceable` - Whether to create instanceable assets.
Usually, this should be set to ``True``.
* :attr:`~sim.converters.UrdfConverterCfg.merge_fixed_joints` - Whether to merge the fixed joints.
Usually, this should be set to ``True`` to reduce the asset complexity.
* :attr:`~sim.converters.UrdfConverterCfg.default_drive_type` - The drive-type for the joints.
We recommend this to always be ``"none"``. This allows changing the drive configuration using the
actuator models.
* :attr:`~sim.converters.UrdfConverterCfg.default_drive_stiffness` - The drive stiffness for the joints.
  We recommend always setting this to ``0.0``. This allows changing the drive configuration using the
actuator models.
* :attr:`~sim.converters.UrdfConverterCfg.default_drive_damping` - The drive damping for the joints.
  Similar to the stiffness, we recommend always setting this to ``0.0``.
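
Putting these settings together, the conversion can also be driven directly from Python. The
following is a minimal sketch; the input and output paths are illustrative, and the field names
for the input/output locations are our reading of the converter's base configuration:

.. code-block:: python

   from omni.isaac.orbit.sim.converters import UrdfConverter, UrdfConverterCfg

   # configure the conversion -- the paths below are illustrative
   urdf_converter_cfg = UrdfConverterCfg(
       asset_path="/path/to/robot.urdf",
       usd_dir="/path/to/output/directory",
       usd_file_name="robot.usd",
       fix_base=False,
       make_instanceable=True,
       merge_fixed_joints=True,
       default_drive_type="none",
       default_drive_stiffness=0.0,
       default_drive_damping=0.0,
   )
   # run the conversion and print the path of the generated USD file
   urdf_converter = UrdfConverter(urdf_converter_cfg)
   print(urdf_converter.usd_path)
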
Example Usage
~~~~~~~~~~~~~
In this example, we use the pre-processed URDF file of the ANYmal-D robot. To inspect the
pre-processed URDF, please check the `anymal.urdf`_ file. The main differences between the
pre-processed URDF and the original URDF are:
* We removed the ``<gazebo>`` tag from the URDF. This tag is not supported by the URDF importer.
* We removed the ``<transmission>`` tag from the URDF. This tag is not supported by the URDF importer.
* We removed various collision bodies from the URDF to reduce the complexity of the asset.
* We changed all the joint's damping and friction parameters to ``0.0``. This ensures that we can perform
effort-control on the joints without PhysX adding additional damping.
* We added the ``<dont_collapse>`` tag to fixed joints. This ensures that the importer does
not merge these fixed joints.
The following shows the steps to clone the repository and run the converter:
.. code-block:: bash
# create a directory to clone
mkdir ~/git && cd ~/git
# clone a repository with URDF files
git clone [email protected]:isaac-orbit/anymal_d_simple_description.git
# go to top of the repository
cd /path/to/orbit
# run the converter
./orbit.sh -p source/standalone/tools/convert_urdf.py \
~/git/anymal_d_simple_description/urdf/anymal.urdf \
source/extensions/omni.isaac.orbit_assets/data/Robots/ANYbotics/anymal_d.usd \
--merge-joints \
--make-instanceable
Executing the above script will create two USD files inside the
``source/extensions/omni.isaac.orbit_assets/data/Robots/ANYbotics/`` directory:
* ``anymal_d.usd`` - This is the main asset file. It contains all the non-mesh data.
* ``Props/instanceable_assets.usd`` - This is the mesh data file.
.. note::
Since Isaac Sim 2023.1.1, the URDF importer behavior has changed and it stores the mesh data inside the
main asset file even if the ``--make-instanceable`` flag is set. This means that the
``Props/instanceable_assets.usd`` file is created but not used anymore.
You can press play on the opened window to see the asset in the scene. The asset should "collapse"
if everything is working correctly. If it blows up, then it might be that you have self-collisions
present in the URDF.
To run the script headless, you can add the ``--headless`` flag. This will not open the GUI and
exit the script after the conversion is complete.
Using Mesh Importer
-------------------
Omniverse Kit includes the mesh converter tool that uses the ASSIMP library to import assets
from various mesh formats (e.g. OBJ, FBX, STL, glTF, etc.). The asset converter tool is available
as an extension to Omniverse Kit. Please check the `asset converter`_ documentation for more details.
However, unlike Isaac Sim's URDF and MJCF importers, the asset converter tool does not support
creating instanceable assets. This means that the asset will be loaded into memory multiple times
if it is used multiple times in a scene.
Thus, we include a utility tool called ``convert_mesh.py`` that uses the asset converter tool to
import the asset and then converts it into an instanceable asset. Internally, this script creates
an instance of :class:`~sim.converters.MeshConverterCfg` which is then passed to the
:class:`~sim.converters.MeshConverter` class. Since the mesh file does not contain any physics
information, the configuration class accepts different physics properties (such as mass, collision
shape, etc.) as input. Please check the documentation for :class:`~sim.converters.MeshConverterCfg`
for more details.
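
As a rough Python counterpart to the command-line tool shown below (a minimal sketch; the paths
and physics values are illustrative, and we assume the schema configuration classes are exposed
under ``omni.isaac.orbit.sim.schemas``):

.. code-block:: python

   from omni.isaac.orbit.sim import schemas
   from omni.isaac.orbit.sim.converters import MeshConverter, MeshConverterCfg

   # configure the conversion -- the values below are illustrative
   mesh_converter_cfg = MeshConverterCfg(
       asset_path="/path/to/cube_multicolor.obj",
       usd_dir="/path/to/output/directory",
       usd_file_name="cube_multicolor.usd",
       make_instanceable=True,
       collision_approximation="convexDecomposition",
       mass_props=schemas.MassPropertiesCfg(mass=1.0),
       rigid_props=schemas.RigidBodyPropertiesCfg(),
       collision_props=schemas.CollisionPropertiesCfg(),
   )
   # run the conversion and print the path of the generated USD file
   mesh_converter = MeshConverter(mesh_converter_cfg)
   print(mesh_converter.usd_path)
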
Example Usage
~~~~~~~~~~~~~
We use an OBJ file of a cube to demonstrate the usage of the mesh converter. The following shows
the steps to clone the repository and run the converter:
.. code-block:: bash
# create a directory to clone
mkdir ~/git && cd ~/git
# clone a repository with URDF files
git clone [email protected]:NVIDIA-Omniverse/IsaacGymEnvs.git
# go to top of the repository
cd /path/to/orbit
# run the converter
./orbit.sh -p source/standalone/tools/convert_mesh.py \
~/git/IsaacGymEnvs/assets/trifinger/objects/meshes/cube_multicolor.obj \
source/extensions/omni.isaac.orbit_assets/data/Props/CubeMultiColor/cube_multicolor.usd \
--make-instanceable \
--collision-approximation convexDecomposition \
--mass 1.0
Similar to the URDF converter, executing the above script will create two USD files inside the
``source/extensions/omni.isaac.orbit_assets/data/Props/CubeMultiColor/`` directory. Additionally,
if you press play on the opened window, you should see the asset fall down under the influence
of gravity.
* If you do not set the ``--mass`` flag, then no rigid body properties will be added to the asset.
It will be imported as a static asset.
* If you also do not set the ``--collision-approximation`` flag, then the asset will not have any collider
properties as well and will be imported as a visual asset.
.. _instanceable: https://openusd.org/dev/api/_usd__page__scenegraph_instancing.html
.. _documentation: https://docs.omniverse.nvidia.com/isaacsim/latest/isaac_gym_tutorials/tutorial_gym_instanceable_assets.html
.. _MJCF importer: https://docs.omniverse.nvidia.com/isaacsim/latest/advanced_tutorials/tutorial_advanced_import_mjcf.html
.. _URDF importer: https://docs.omniverse.nvidia.com/isaacsim/latest/advanced_tutorials/tutorial_advanced_import_urdf.html
.. _anymal.urdf: https://github.com/isaac-orbit/anymal_d_simple_description/blob/master/urdf/anymal.urdf
.. _asset converter: https://docs.omniverse.nvidia.com/extensions/latest/ext_asset-converter.html
NVIDIA-Omniverse/orbit/docs/source/how-to/master_omniverse.rst

Mastering Omniverse for Robotics
================================
NVIDIA Omniverse offers a large suite of tools for 3D content workflows.
There are three main components (relevant to robotics) in Omniverse:
- **USD Composer**: This is based on a novel file format (Universal Scene
Description) from the animation (originally Pixar) community that is
used in Omniverse
- **PhysX SDK**: This is the main physics engine behind Omniverse that
leverages GPU-based parallelization for massive scenes
- **RTX-enabled Renderer**: This uses ray-tracing kernels in NVIDIA RTX
GPUs for real-time physically-based rendering
Of these, the first two require a deeper understanding to start working
with Omniverse and its constituent applications (Isaac Sim and Orbit).
The main things to learn:
- How to use the Composer GUI efficiently?
- What are USD prims and schemas?
- How do you compose a USD scene?
- What is the difference between references and payloads in USD?
- What is meant by scene-graph instancing?
- How to apply PhysX schemas on prims? What all schemas are possible?
- How to write basic operations in USD for creating prims and modifying
their attributes?
Part 1: Using USD Composer
--------------------------
While several `video
tutorials <https://www.youtube.com/@NVIDIA-Studio>`__ and
`documentation <https://docs.omniverse.nvidia.com/>`__ exist
out there on NVIDIA Omniverse, going through all of them would take an
extensive amount of time and effort. Thus, we have curated these
resources to guide you through using Omniverse, specifically for
robotics.
Introduction to Omniverse and USD
- `What is NVIDIA Omniverse? <https://youtu.be/dvdB-ndYJBM>`__
- `What is the USD File Type? \| Getting Started in NVIDIA Omniverse <https://youtu.be/GOdyx-oSs2M>`__
- `What Makes USD Unique in NVIDIA Omniverse <https://youtu.be/o2x-30-PTkw>`__
Using Omniverse USD Composer
- `Introduction to Omniverse USD Composer <https://youtu.be/_30Pf3nccuE>`__
- `Navigation Basics in Omniverse USD Composer <https://youtu.be/kb4ZA3TyMak>`__
- `Lighting Basics in NVIDIA Omniverse USD Composer <https://youtu.be/c7qyI8pZvF4>`__
- `Rendering Overview in NVIDIA Omniverse USD Composer <https://youtu.be/dCvq2ZyYmu4>`__
Materials and MDL
- `Five Things to Know About Materials in NVIDIA Omniverse <https://youtu.be/C0HmcQXaENc>`__
- `How to apply materials? <https://docs.omniverse.nvidia.com/materials-and-rendering/latest/materials.html%23applying-materials>`__
Omniverse Physics and PhysX SDK
- `Basics - Setting Up Physics and Toolbar Overview <https://youtu.be/nsJ0S9MycJI>`__
- `Basics - Demos Overview <https://youtu.be/-y0-EVTj10s>`__
- `Rigid Bodies - Mass Editing <https://youtu.be/GHl2RwWeRuM>`__
- `Materials - Friction Restitution and Defaults <https://youtu.be/oTW81DltNiE>`__
- `Overview of Simulation Ready Assets Physics in Omniverse <https://youtu.be/lFtEMg86lJc>`__
Importing assets
- `Omniverse Create - Importing FBX Files \| NVIDIA Omniverse Tutorials <https://youtu.be/dQI0OpzfVHw>`__
- `Omniverse Asset Importer <https://docs.omniverse.nvidia.com/extensions/latest/ext_asset-importer.html>`__
- `Isaac Sim URDF importer <https://docs.omniverse.nvidia.com/isaacsim/latest/ext_omni_isaac_urdf.html>`__
Part 2: Scripting in Omniverse
------------------------------
The above links mainly introduced how to use the USD Composer and its
functionalities through UI operations. However, often developers
need to write scripts to perform operations. This is especially true
when you want to automate certain tasks or create custom applications
that use Omniverse as a backend. This section will introduce you to
scripting in Omniverse.
USD is the main file format Omniverse operates with. So naturally, the
APIs (from OpenUSD) for modifying USD are at the core of Omniverse.
Most of the APIs are in C++ and Python bindings are provided for them.
Thus, to script in Omniverse, you need to understand the USD APIs.
.. note::
   While Isaac Sim and Orbit try to "relieve" users from understanding the
   core USD concepts and APIs, understanding these basics still helps a lot
   once you start diving into the codebase and modifying it for your own
   application.
Before diving into USD scripting, it is good to get acquainted with the
terminologies used in USD. We recommend the following `introduction to
USD basics <https://www.sidefx.com/docs/houdini/solaris/usd.html>`__ from the
documentation of Houdini, a 3D animation software package.
Make sure to go through the following sections:
- `Quick example <https://www.sidefx.com/docs/houdini/solaris/usd.html#quick-example>`__
- `Attributes and primvars <https://www.sidefx.com/docs/houdini/solaris/usd.html#attrs>`__
- `Composition <https://www.sidefx.com/docs/houdini/solaris/usd.html#compose>`__
- `Schemas <https://www.sidefx.com/docs/houdini/solaris/usd.html#schemas>`__
- `Instances <https://www.sidefx.com/docs/houdini/solaris/usd.html#instancing>`__
and `Scene-graph Instancing <https://openusd.org/dev/api/_usd__page__scenegraph_instancing.html>`__
As a test of understanding, make sure you can answer the following:
- What are prims? What is meant by a prim path in a stage?
- How are attributes related to prims?
- How are schemas related to prims?
- What is the difference between attributes and schemas?
- What is asset instancing?
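To ground these concepts, the short example below uses the OpenUSD Python API
(the ``pxr`` module that ships with Omniverse) to create prims, author an
attribute, and apply physics schemas. It is a minimal sketch meant for
experimentation (for instance, in the script editor of Isaac Sim); the prim
paths and values are illustrative.

.. code-block:: python

   from pxr import Usd, UsdGeom, UsdPhysics

   # create an in-memory stage (Usd.Stage.CreateNew("scene.usda") writes to disk)
   stage = Usd.Stage.CreateInMemory()

   # define prims: a transform at /World and a sphere underneath it
   UsdGeom.Xform.Define(stage, "/World")
   sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")

   # modify an attribute through the typed schema API
   sphere.GetRadiusAttr().Set(0.5)

   # apply physics schemas to make the prim a rigid body with a collider
   prim = stage.GetPrimAtPath("/World/Sphere")
   UsdPhysics.RigidBodyAPI.Apply(prim)
   UsdPhysics.CollisionAPI.Apply(prim)

   # read the attribute back through the generic prim interface
   print(prim.GetAttribute("radius").Get())  # prints 0.5

   # inspect the authored USD as text
   print(stage.GetRootLayer().ExportToString())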
Part 3: More Resources
----------------------
- `Omniverse Glossary of Terms <https://docs.omniverse.nvidia.com/isaacsim/latest/common/glossary-of-terms.html>`__
- `Omniverse Code Samples <https://docs.omniverse.nvidia.com/dev-guide/latest/programmer_ref.html>`__
- `PhysX Collider Compatibility <https://docs.omniverse.nvidia.com/extensions/latest/ext_physics/rigid-bodies.html#collidercompatibility>`__
- `PhysX Limitations <https://docs.omniverse.nvidia.com/isaacsim/latest/features/physics/physX_limitations.html>`__
- `PhysX Documentation <https://nvidia-omniverse.github.io/PhysX/physx/>`__.
| 5,971 | reStructuredText | 46.776 | 140 | 0.748786 |
NVIDIA-Omniverse/orbit/docs/source/how-to/write_articulation_cfg.rst | .. _how-to-write-articulation-config:
Writing an Asset Configuration
==============================
.. currentmodule:: omni.isaac.orbit
This guide walks through the process of creating an :class:`~assets.ArticulationCfg`.
The :class:`~assets.ArticulationCfg` is a configuration object that defines the
properties of an :class:`~assets.Articulation` in Orbit.
.. note::
While we only cover the creation of an :class:`~assets.ArticulationCfg` in this guide,
the process is similar for creating any other asset configuration object.
We will use the Cartpole example to demonstrate how to create an :class:`~assets.ArticulationCfg`.
The Cartpole is a simple robot that consists of a cart with a pole attached to it. The cart
is free to move along a rail, and the pole is free to rotate about the cart.
.. dropdown:: Code for Cartpole configuration
:icon: code
.. literalinclude:: ../../../source/extensions/omni.isaac.orbit_assets/omni/isaac/orbit_assets/cartpole.py
:language: python
:linenos:
Defining the spawn configuration
--------------------------------
As explained in :ref:`tutorial-spawn-prims` tutorials, the spawn configuration defines
the properties of the assets to be spawned. This spawning may happen procedurally, or
through an existing asset file (e.g. USD or URDF). In this example, we will spawn the
Cartpole from a USD file.
When spawning an asset from a USD file, we define its :class:`~sim.spawners.from_files.UsdFileCfg`.
This configuration object takes in the following parameters:
* :class:`~sim.spawners.from_files.UsdFileCfg.usd_path`: The USD file path to spawn from
* :class:`~sim.spawners.from_files.UsdFileCfg.rigid_props`: The rigid-body properties applied to the articulation's links
* :class:`~sim.spawners.from_files.UsdFileCfg.articulation_props`: The properties of the articulation's root
The last two parameters are optional. If not specified, they are kept at their default values in the USD file.
.. literalinclude:: ../../../source/extensions/omni.isaac.orbit_assets/omni/isaac/orbit_assets/cartpole.py
:language: python
:lines: 17-33
:dedent:
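For orientation, a spawn configuration along these lines can also be written out
directly. The snippet below is a sketch rather than the exact shipped
configuration: the USD path is a placeholder and the property values are
illustrative.

.. code-block:: python

   import omni.isaac.orbit.sim as sim_utils

   spawn = sim_utils.UsdFileCfg(
       # placeholder: point this at your articulation's USD file
       usd_path="path/to/cartpole.usd",
       # optional: override rigid-body properties of the links
       rigid_props=sim_utils.RigidBodyPropertiesCfg(
           max_linear_velocity=1000.0,
           max_angular_velocity=1000.0,
       ),
       # optional: override properties of the articulation root
       articulation_props=sim_utils.ArticulationRootPropertiesCfg(
           enabled_self_collisions=False,
       ),
   )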
To import an articulation from a URDF file instead of a USD file, you can replace the
:class:`~sim.spawners.from_files.UsdFileCfg` with a :class:`~sim.spawners.from_files.UrdfFileCfg`.
For more details, please check the API documentation.
Defining the initial state
--------------------------
Every asset requires its initial or *default* state in the simulation to be defined through its configuration.
This configuration is stored in the asset's default state buffers, which can be accessed when the asset's
state needs to be reset.
.. note::
The initial state of an asset is defined w.r.t. its local environment frame. This then needs to
be transformed into the global simulation frame when resetting the asset's state. For more
details, please check the :ref:`tutorial-interact-articulation` tutorial.
For an articulation, the :class:`~assets.ArticulationCfg.InitialStateCfg` object defines the
initial state of the root of the articulation and the initial state of all its joints. In this
example, we will spawn the Cartpole at the origin of the XY plane at a Z height of 2.0 meters.
Meanwhile, the joint positions and velocities are set to 0.0.
.. literalinclude:: ../../../source/extensions/omni.isaac.orbit_assets/omni/isaac/orbit_assets/cartpole.py
:language: python
:lines: 34-36
:dedent:
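Written out explicitly, such an initial state corresponds to a configuration
along these lines (a sketch; the joint names follow the Cartpole asset):

.. code-block:: python

   from omni.isaac.orbit.assets import ArticulationCfg

   init_state = ArticulationCfg.InitialStateCfg(
       # spawn the cart at the XY origin, 2 m above the ground plane
       pos=(0.0, 0.0, 2.0),
       # start with both joints at rest
       joint_pos={"slider_to_cart": 0.0, "cart_to_pole": 0.0},
   )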
Defining the actuator configuration
-----------------------------------
Actuators are a crucial component of an articulation. Through this configuration, it is possible
to define the type of actuator model to use. We can use the internal actuator model provided by
the physics engine (i.e. the implicit actuator model), or use a custom actuator model which is
governed by a user-defined system of equations (i.e. the explicit actuator model).
For more details on actuators, see :ref:`feature-actuators`.
The cartpole articulation has two actuators, one for each of its joints:
``cart_to_pole`` and ``slider_to_cart``. As an example, we configure them as two
separate actuator groups. However, since both groups use the same actuator model,
it is also possible to combine them into a single group.
.. dropdown:: Actuator model configuration with separate actuator models
:icon: code
.. literalinclude:: ../../../source/extensions/omni.isaac.orbit_assets/omni/isaac/orbit_assets/cartpole.py
:language: python
:lines: 37-47
:dedent:
.. dropdown:: Actuator model configuration with a single actuator model
:icon: code
.. code-block:: python
actuators={
"all_joints": ImplicitActuatorCfg(
joint_names_expr=[".*"],
effort_limit=400.0,
velocity_limit=100.0,
stiffness={"slider_to_cart": 0.0, "cart_to_pole": 0.0},
damping={"slider_to_cart": 10.0, "cart_to_pole": 0.0},
),
},
| 4,957 | reStructuredText | 41.376068 | 113 | 0.727053 |
NVIDIA-Omniverse/orbit/docs/source/how-to/record_animation.rst | Recording Animations of Simulations
===================================
.. currentmodule:: omni.isaac.orbit
Omniverse includes tools to record animations of physics simulations. The `Stage Recorder`_ extension
listens to all the motion and USD property changes within a USD stage and records them to a USD file.
This file contains the time samples of the changes, which can be played back to render the animation.
The timeSampled USD file only contains the changes to the stage. It uses the same hierarchy as the original
stage at the time of recording. This allows adding the animation to the original stage, or to a different
stage with the same hierarchy. The timeSampled file can be directly added as a sublayer to the original stage
to play back the animation.
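For instance, given the default output file names used later in this guide
(``Stage.usd`` and ``TimeSample_tk001.usd``), the timeSampled file can be added
as a sublayer programmatically. This is a minimal sketch using the OpenUSD
Python API:

.. code-block:: python

   from pxr import Usd

   # open the recorded stage and append the timeSampled file as a sublayer
   stage = Usd.Stage.Open("Stage.usd")
   stage.GetRootLayer().subLayerPaths.append("TimeSample_tk001.usd")
   stage.GetRootLayer().Save()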
.. note::
   Omniverse only supports playing either an animation or the physics simulation on a USD prim, not both
   at the same time. If you want to play back the animation of a USD prim, you need to disable the physics
   simulation on the prim.
In Orbit, we directly use the `Stage Recorder`_ extension to record the animation of the physics simulation.
This is available as a feature in the :class:`~omni.isaac.orbit.envs.ui.BaseEnvWindow` class.
However, to record the animation of a simulation, you need to disable `Fabric`_ to allow reading and writing
all the changes (such as motion and USD properties) to the USD stage.
Stage Recorder Settings
~~~~~~~~~~~~~~~~~~~~~~~
Orbit's integration of the `Stage Recorder`_ extension assumes certain default settings. If you want to change the
settings, you can directly use the `Stage Recorder`_ extension in the Omniverse Create application.
.. dropdown:: Settings used in base_env_window.py
:icon: code
.. literalinclude:: ../../../source/extensions/omni.isaac.orbit/omni/isaac/orbit/envs/ui/base_env_window.py
:language: python
:linenos:
:pyobject: BaseEnvWindow._toggle_recording_animation_fn
Example Usage
~~~~~~~~~~~~~
In all environment standalone scripts, Fabric can be disabled by passing the ``--disable_fabric`` flag to the script.
Here we run the state-machine example and record the animation of the simulation.
.. code-block:: bash
./orbit.sh -p source/standalone/environments/state_machine/lift_cube_sm.py --num_envs 8 --cpu --disable_fabric
On running the script, the Orbit UI window opens with the button "Record Animation" in the toolbar.
Clicking this button starts recording the animation of the simulation. On clicking the button again, the
recording stops. The recorded animation and the original stage (with all physics disabled) are saved
to the ``recordings`` folder in the current working directory. The files are stored in the ``usd`` format:
- ``Stage.usd``: The original stage with all physics disabled
- ``TimeSample_tk001.usd``: The timeSampled file containing the recorded animation
You can open the Omniverse Isaac Sim application to play back the animation. There are many ways to launch
the application (such as from the terminal or the `Omniverse Launcher`_). Here we use the terminal to open the
application and play the animation.
.. code-block:: bash
./orbit.sh -s # Opens Isaac Sim application through _isaac_sim/isaac-sim.sh
On a new stage, add the ``Stage.usd`` as a sublayer and then add the ``TimeSample_tk001.usd`` as a sublayer.
You can do this by dragging and dropping the files from the file explorer to the stage. Please check out
the `tutorial on layering in Omniverse`_ for more details.
You can then play the animation by pressing the play button.
.. _Stage Recorder: https://docs.omniverse.nvidia.com/extensions/latest/ext_animation_stage-recorder.html
.. _Fabric: https://docs.omniverse.nvidia.com/kit/docs/usdrt/latest/docs/usd_fabric_usdrt.html
.. _Omniverse Launcher: https://docs.omniverse.nvidia.com/launcher/latest/index.html
.. _tutorial on layering in Omniverse: https://www.youtube.com/watch?v=LTwmNkSDh-c&ab_channel=NVIDIAOmniverse
| 3,914 | reStructuredText | 48.556961 | 117 | 0.762902 |
NVIDIA-Omniverse/orbit/docs/source/how-to/index.rst | How-to Guides
=============
This section includes guides that help you use Orbit. These are intended for users who
have already worked through the tutorials and are looking for more information on how to
use Orbit. If you are new to Orbit, we recommend you start with the tutorials.
.. note::
This section is a work in progress. If you have a question that is not answered here,
please open an issue on our `GitHub page <https://github.com/NVIDIA-Omniverse/Orbit>`_.
.. toctree::
:maxdepth: 1
import_new_asset
write_articulation_cfg
save_camera_output
draw_markers
wrap_rl_env
master_omniverse
record_animation
| 656 | reStructuredText | 27.565216 | 91 | 0.716463 |
NVIDIA-Omniverse/orbit/docs/source/tutorials/index.rst | Tutorials
=========
Welcome to the Orbit tutorials! These tutorials provide a step-by-step guide to help you understand
and use various features of the framework. All the tutorials are written as Python scripts. You can
find the source code for each tutorial in the ``source/standalone/tutorials`` directory of the Orbit
repository.
.. note::
We would love to extend the tutorials to cover more topics and use cases, so please let us know if
you have any suggestions.
We recommend that you go through the tutorials in the order they are listed here.
.. toctree::
:maxdepth: 2
00_sim/index
01_assets/index
02_scene/index
03_envs/index
04_sensors/index
05_controllers/index
| 714 | reStructuredText | 27.599999 | 102 | 0.736695 |
NVIDIA-Omniverse/orbit/docs/source/tutorials/01_assets/run_articulation.rst | .. _tutorial-interact-articulation:
Interacting with an articulation
================================
.. currentmodule:: omni.isaac.orbit
This tutorial shows how to interact with an articulated robot in the simulation. It is a continuation of the
:ref:`tutorial-interact-rigid-object` tutorial, where we learned how to interact with a rigid object.
On top of setting the root state, we will see how to set the joint state and apply commands to the articulated
robot.
The Code
~~~~~~~~
The tutorial corresponds to the ``run_articulation.py`` script in the ``orbit/source/standalone/tutorials/01_assets``
directory.
.. dropdown:: Code for run_articulation.py
:icon: code
.. literalinclude:: ../../../../source/standalone/tutorials/01_assets/run_articulation.py
:language: python
:emphasize-lines: 60-71, 93-106, 110-113, 118-119
:linenos:
The Code Explained
~~~~~~~~~~~~~~~~~~
Designing the scene
-------------------
Similar to the previous tutorial, we populate the scene with a ground plane and a distant light. Instead of
spawning rigid objects, we now spawn a cart-pole articulation from its USD file. The cart-pole is a simple robot
consisting of a cart and a pole attached to it. The cart is free to move along the x-axis, and the pole is free to
rotate about the cart. The USD file for the cart-pole contains the robot's geometry, joints, and other physical
properties.
For the cart-pole, we use its pre-defined configuration object, which is an instance of the
:class:`assets.ArticulationCfg` class. This class contains information about the articulation's spawning strategy,
default initial state, actuator models for different joints, and other meta-information. A deeper-dive into how to
create this configuration object is provided in the :ref:`how-to-write-articulation-config` tutorial.
As seen in the previous tutorial, we can spawn the articulation into the scene in a similar fashion by creating
an instance of the :class:`assets.Articulation` class by passing the configuration object to its constructor.
.. literalinclude:: ../../../../source/standalone/tutorials/01_assets/run_articulation.py
:language: python
:start-at: # Create separate groups called "Origin1", "Origin2", "Origin3"
:end-at: cartpole = Articulation(cfg=cartpole_cfg)
Running the simulation loop
---------------------------
Continuing from the previous tutorial, we reset the simulation at regular intervals, set commands to the articulation,
step the simulation, and update the articulation's internal buffers.
Resetting the simulation
""""""""""""""""""""""""
Similar to a rigid object, an articulation also has a root state. This state corresponds to the root body in the
articulation tree. On top of the root state, an articulation also has joint states. These states correspond to the
joint positions and velocities.
To reset the articulation, we first set the root state by calling the :meth:`Articulation.write_root_state_to_sim`
method. Similarly, we set the joint states by calling the :meth:`Articulation.write_joint_state_to_sim` method.
Finally, we call the :meth:`Articulation.reset` method to reset any internal buffers and caches.
.. literalinclude:: ../../../../source/standalone/tutorials/01_assets/run_articulation.py
:language: python
:start-at: # reset the scene entities
:end-at: robot.reset()
Stepping the simulation
"""""""""""""""""""""""
Applying commands to the articulation involves two steps:
1. *Setting the joint targets*: This sets the desired joint position, velocity, or effort targets for the articulation.
2. *Writing the data to the simulation*: Based on the articulation's configuration, this step handles any
:ref:`actuation conversions <feature-actuators>` and writes the converted values to the PhysX buffer.
In this tutorial, we control the articulation using joint effort commands. For this to work, we need to set the
articulation's stiffness and damping parameters to zero. This is done a priori inside the cart-pole's pre-defined
configuration object.
At every step, we randomly sample joint efforts and set them to the articulation by calling the
:meth:`Articulation.set_joint_effort_target` method. After setting the targets, we call the
:meth:`Articulation.write_data_to_sim` method to write the data to the PhysX buffer. Finally, we step
the simulation.
.. literalinclude:: ../../../../source/standalone/tutorials/01_assets/run_articulation.py
:language: python
:start-at: # Apply random action
:end-at: robot.write_data_to_sim()
Updating the state
""""""""""""""""""
Every articulation class contains a :class:`assets.ArticulationData` object. This stores the state of the
articulation. To update the state inside the buffer, we call the :meth:`assets.Articulation.update` method.
.. literalinclude:: ../../../../source/standalone/tutorials/01_assets/run_articulation.py
:language: python
:start-at: # Update buffers
:end-at: robot.update(sim_dt)
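Putting the pieces together, the interaction loop described above boils down to
the following condensed sketch. It assumes ``robot`` is an
:class:`assets.Articulation` instance and ``sim`` a running simulation context;
the effort scale and reset interval are illustrative.

.. code-block:: python

   import torch

   sim_dt = sim.get_physics_dt()
   for count in range(1000):
       if count % 500 == 0:
           # reset the root and joint state from the default buffers
           robot.write_root_state_to_sim(robot.data.default_root_state.clone())
           robot.write_joint_state_to_sim(
               robot.data.default_joint_pos.clone(), robot.data.default_joint_vel.clone()
           )
           robot.reset()
       # sample random joint efforts and write them to the physics buffers
       efforts = torch.randn_like(robot.data.joint_pos) * 5.0
       robot.set_joint_effort_target(efforts)
       robot.write_data_to_sim()
       # step the simulation and refresh the articulation's buffers
       sim.step()
       robot.update(sim_dt)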
The Code Execution
~~~~~~~~~~~~~~~~~~
To run the code and see the results, let's run the script from the terminal:
.. code-block:: bash
./orbit.sh -p source/standalone/tutorials/01_assets/run_articulation.py
This command should open a stage with a ground plane, lights, and two cart-poles that are moving around randomly.
To stop the simulation, you can either close the window, press the ``STOP`` button in the UI, or press ``Ctrl+C``
in the terminal.
In this tutorial, we learned how to create and interact with a simple articulation. We saw how to set the state
of an articulation (its root and joint state) and how to apply commands to it. We also saw how to update its
buffers to read the latest state from the simulation.
In addition to this tutorial, we also provide a few other scripts that spawn different robots. These are included
in the ``orbit/source/standalone/demos`` directory. You can run these scripts as:
.. code-block:: bash
# Spawn many different single-arm manipulators
./orbit.sh -p source/standalone/demos/arms.py
# Spawn many different quadrupeds
./orbit.sh -p source/standalone/demos/quadrupeds.py
| 6,130 | reStructuredText | 42.176056 | 119 | 0.745351 |
NVIDIA-Omniverse/orbit/docs/source/tutorials/01_assets/run_rigid_object.rst | .. _tutorial-interact-rigid-object:
Interacting with a rigid object
===============================
.. currentmodule:: omni.isaac.orbit
In the previous tutorials, we learned the essential workings of the standalone script and how to
spawn different objects (or *prims*) into the simulation. This tutorial shows how to create and interact
with a rigid object. For this, we will use the :class:`assets.RigidObject` class provided in Orbit.
The Code
~~~~~~~~
The tutorial corresponds to the ``run_rigid_object.py`` script in the ``orbit/source/standalone/tutorials/01_assets`` directory.
.. dropdown:: Code for run_rigid_object.py
:icon: code
.. literalinclude:: ../../../../source/standalone/tutorials/01_assets/run_rigid_object.py
:language: python
:emphasize-lines: 57-76, 78-80, 100-110, 113-114, 120-121, 134-136, 141-142
:linenos:
The Code Explained
~~~~~~~~~~~~~~~~~~
In this script, we split the ``main`` function into two separate functions, which highlight the two main
steps of setting up any simulation in the simulator:
1. **Design scene**: As the name suggests, this part is responsible for adding all the prims to the scene.
2. **Run simulation**: This part is responsible for stepping the simulator, interacting with the prims
in the scene, e.g., changing their poses, and applying any commands to them.
A distinction between these two steps is necessary because the second step only happens after the first step
is complete and the simulator is reset. Once the simulator is reset (which automatically plays the simulation),
no new (physics-enabled) prims should be added to the scene as it may lead to unexpected behaviors. However,
the prims can be interacted with through their respective handles.
Designing the scene
-------------------
Similar to the previous tutorial, we populate the scene with a ground plane and a light source. In addition,
we add a rigid object to the scene using the :class:`assets.RigidObject` class. This class is responsible for
spawning the prims at the input path and initializes their corresponding rigid body physics handles.
In this tutorial, we create a conical rigid object using the spawn configuration similar to the rigid cone
in the :ref:`Spawn Objects <tutorial-spawn-prims>` tutorial. The only difference is that now we wrap
the spawning configuration into the :class:`assets.RigidObjectCfg` class. This class contains information about
the asset's spawning strategy, default initial state, and other meta-information. When this class is passed to
the :class:`assets.RigidObject` class, it spawns the object and initializes the corresponding physics handles
when the simulation is played.
As an example on spawning the rigid object prim multiple times, we create its parent Xform prims,
``/World/Origin{i}``, that correspond to different spawn locations. When the regex expression
``/World/Origin*/Cone`` is passed to the :class:`assets.RigidObject` class, it spawns the rigid object prim at
each of the ``/World/Origin{i}`` locations. For instance, if ``/World/Origin1`` and ``/World/Origin2`` are
present in the scene, the rigid object prims are spawned at the locations ``/World/Origin1/Cone`` and
``/World/Origin2/Cone`` respectively.
.. literalinclude:: ../../../../source/standalone/tutorials/01_assets/run_rigid_object.py
:language: python
:start-at: # Create separate groups called "Origin1", "Origin2", "Origin3"
:end-at: cone_object = RigidObject(cfg=cone_cfg)
Since we want to interact with the rigid object, we pass this entity back to the main function. This entity
is then used to interact with the rigid object in the simulation loop. In later tutorials, we will see a more
convenient way to handle multiple scene entities using the :class:`scene.InteractiveScene` class.
.. literalinclude:: ../../../../source/standalone/tutorials/01_assets/run_rigid_object.py
:language: python
:start-at: # return the scene information
:end-at: return scene_entities, origins
Running the simulation loop
---------------------------
We modify the simulation loop to include three steps for interacting with the rigid object -- resetting the
simulation state at fixed intervals, stepping the simulation, and updating the internal buffers of the
rigid object. For the convenience of this tutorial, we extract the rigid object's entity from the scene
dictionary and store it in a variable.
Resetting the simulation state
""""""""""""""""""""""""""""""
To reset the simulation state of the spawned rigid object prims, we need to set their pose and velocity.
Together they define the root state of the spawned rigid objects. It is important to note that this state
is defined in the **simulation world frame**, and not of their parent Xform prim. This is because the physics
engine only understands the world frame and not the parent Xform prim's frame. Thus, we need to transform
the desired state of the rigid object prim into the world frame before setting it.
We use the :attr:`assets.RigidObject.data.default_root_state` attribute to get the default root state of the
spawned rigid object prims. This default state can be configured from the :attr:`assets.RigidObjectCfg.init_state`
attribute, which we left as identity in this tutorial. We then randomize the translation of the root state and
set the desired state of the rigid object prim using the :meth:`assets.RigidObject.write_root_state_to_sim` method.
As the name suggests, this method writes the root state of the rigid object prim into the simulation buffer.
.. literalinclude:: ../../../../source/standalone/tutorials/01_assets/run_rigid_object.py
:language: python
:start-at: # reset root state
:end-at: cone_object.reset()
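In essence, the reset logic boils down to the following sketch. It assumes
``cone_object`` is a :class:`assets.RigidObject` and ``origins`` a tensor
holding the world-frame position of each parent Xform prim; the uniform offset
here stands in for the sampling used in the actual script.

.. code-block:: python

   import torch

   # the default state is w.r.t. the local environment frame; shift it into the world frame
   root_state = cone_object.data.default_root_state.clone()
   root_state[:, :3] += origins
   # randomize the XY position around each origin
   root_state[:, :2] += torch.rand_like(root_state[:, :2]) - 0.5
   # write the state to the simulation and reset the internal buffers
   cone_object.write_root_state_to_sim(root_state)
   cone_object.reset()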
Stepping the simulation
"""""""""""""""""""""""
Before stepping the simulation, we perform the :meth:`assets.RigidObject.write_data_to_sim` method. This method
writes other data, such as external forces, into the simulation buffer. In this tutorial, we do not apply any
external forces to the rigid object, so this method is not necessary. However, it is included for completeness.
.. literalinclude:: ../../../../source/standalone/tutorials/01_assets/run_rigid_object.py
:language: python
:start-at: # apply sim data
:end-at: cone_object.write_data_to_sim()
Updating the state
""""""""""""""""""
After stepping the simulation, we update the internal buffers of the rigid object prims to reflect their new state
inside the :class:`assets.RigidObject.data` attribute. This is done using the :meth:`assets.RigidObject.update` method.
.. literalinclude:: ../../../../source/standalone/tutorials/01_assets/run_rigid_object.py
:language: python
:start-at: # update buffers
:end-at: cone_object.update(sim_dt)
The Code Execution
~~~~~~~~~~~~~~~~~~
Now that we have gone through the code, let's run the script and see the result:
.. code-block:: bash
./orbit.sh -p source/standalone/tutorials/01_assets/run_rigid_object.py
This should open a stage with a ground plane, lights, and several green cones. The cones should drop from
a random height and settle onto the ground. To stop the simulation, you can either close the window, press
the ``STOP`` button in the UI, or press ``Ctrl+C`` in the terminal.
This tutorial showed how to spawn rigid objects and wrap them in a :class:`RigidObject` class to initialize their
physics handles which allows setting and obtaining their state. In the next tutorial, we will see how to interact
with an articulated object which is a collection of rigid objects connected by joints.
| 7,574 | reStructuredText | 50.182432 | 128 | 0.750594 |
NVIDIA-Omniverse/orbit/docs/source/tutorials/01_assets/index.rst | Interacting with Assets
=======================
Having spawned objects in the scene, these tutorials show you how to create physics handles for these
objects and interact with them. These revolve around the :class:`~omni.isaac.orbit.assets.AssetBase`
class and its derivatives such as :class:`~omni.isaac.orbit.assets.RigidObject` and
:class:`~omni.isaac.orbit.assets.Articulation`.
.. toctree::
:maxdepth: 1
:titlesonly:
run_rigid_object
run_articulation
| 475 | reStructuredText | 30.733331 | 101 | 0.726316 |
NVIDIA-Omniverse/orbit/docs/source/tutorials/02_scene/create_scene.rst | .. _tutorial-interactive-scene:
Using the Interactive Scene
===========================
.. currentmodule:: omni.isaac.orbit
So far in the tutorials, we manually spawned assets into the simulation and created
object instances to interact with them. However, as the complexity of the scene
increases, it becomes tedious to perform these tasks manually. In this tutorial,
we will introduce the :class:`scene.InteractiveScene` class, which provides a convenient
interface for spawning prims and managing them in the simulation.
At a high-level, the interactive scene is a collection of scene entities. Each entity
can be either a non-interactive prim (e.g. ground plane, light source), an interactive
prim (e.g. articulation, rigid object), or a sensor (e.g. camera, lidar). The interactive
scene provides a convenient interface for spawning these entities and managing them
in the simulation.
Compared to the manual approach, it provides the following benefits:
* Relieves the user from spawning each asset separately, as this is handled implicitly.
* Enables user-friendly cloning of scene prims for multiple environments.
* Collects all the scene entities into a single object, which makes them easier to manage.
In this tutorial, we take the cartpole example from the :ref:`tutorial-interact-articulation`
tutorial and replace the ``design_scene`` function with an :class:`scene.InteractiveScene` object.
While it may seem like overkill to use the interactive scene for this simple example, it will
become more useful in the future as more assets and sensors are added to the scene.
The Code
~~~~~~~~
This tutorial corresponds to the ``create_scene.py`` script within
``orbit/source/standalone/tutorials/02_scene``.
.. dropdown:: Code for create_scene.py
:icon: code
.. literalinclude:: ../../../../source/standalone/tutorials/02_scene/create_scene.py
:language: python
:emphasize-lines: 52-65, 70-72, 93-94, 101-102, 107-108, 118-120
:linenos:
The Code Explained
~~~~~~~~~~~~~~~~~~
While the code is similar to the previous tutorial, there are a few key differences
that we will go over in detail.
Scene configuration
-------------------
The scene is composed of a collection of entities, each with their own configuration.
These are specified in a configuration class that inherits from :class:`scene.InteractiveSceneCfg`.
The configuration class is then passed to the :class:`scene.InteractiveScene` constructor
to create the scene.
For the cartpole example, we specify the same scene as in the previous tutorial, but list
them now in the configuration class :class:`CartpoleSceneCfg` instead of manually spawning them.
.. literalinclude:: ../../../../source/standalone/tutorials/02_scene/create_scene.py
:language: python
:pyobject: CartpoleSceneCfg
The variable names in the configuration class are used as keys to access the corresponding
entity from the :class:`scene.InteractiveScene` object. For example, the cartpole can
be accessed via ``scene["cartpole"]``. However, we will get to that later. First, let's
look at how individual scene entities are configured.
Similar to how a rigid object and articulation were configured in the previous tutorials,
the configurations are specified using a configuration class. However, there is a key
difference between the configurations for the ground plane and light source and the
configuration for the cartpole. The ground plane and light source are non-interactive
prims, while the cartpole is an interactive prim. This distinction is reflected in the
configuration classes used to specify them. The configurations for the ground plane and
light source are specified using an instance of the :class:`assets.AssetBaseCfg` class
while the cartpole is configured using an instance of the :class:`assets.ArticulationCfg`.
Anything that is not an interactive prim (i.e., neither an asset nor a sensor) is not
*handled* by the scene during simulation steps.
Another key difference to note is in the specification of the prim paths for the
different prims:
* Ground plane: ``/World/defaultGroundPlane``
* Light source: ``/World/Light``
* Cartpole: ``{ENV_REGEX_NS}/Robot``
As we learned earlier, Omniverse creates a graph of prims in the USD stage. The prim
paths are used to specify the location of the prim in the graph. The ground plane and
light source are specified using absolute paths, while the cartpole is specified using
a relative path. The relative path is specified using the ``ENV_REGEX_NS`` variable,
which is a special variable that is replaced with the environment name during scene creation.
Any entity that has the ``ENV_REGEX_NS`` variable in its prim path will be cloned for each
environment. This path is replaced by the scene object with ``/World/envs/env_{i}`` where
``i`` is the environment index.
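As a sketch, the two kinds of prim paths appear in the configuration class
roughly as follows. The asset configurations are abbreviated, and the import of
the pre-defined ``CARTPOLE_CFG`` follows the convention used in the tutorial
scripts.

.. code-block:: python

   import omni.isaac.orbit.sim as sim_utils
   from omni.isaac.orbit.assets import ArticulationCfg, AssetBaseCfg
   from omni.isaac.orbit.scene import InteractiveSceneCfg
   from omni.isaac.orbit.utils import configclass

   from omni.isaac.orbit_assets import CARTPOLE_CFG

   @configclass
   class CartpoleSceneCfg(InteractiveSceneCfg):
       # absolute path: created once and shared across all environments
       ground = AssetBaseCfg(
           prim_path="/World/defaultGroundPlane", spawn=sim_utils.GroundPlaneCfg()
       )
       # relative path: {ENV_REGEX_NS} is replaced with /World/envs/env_{i}
       cartpole: ArticulationCfg = CARTPOLE_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")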
Scene instantiation
-------------------
Unlike before where we called the ``design_scene`` function to create the scene, we now
create an instance of the :class:`scene.InteractiveScene` class and pass in the configuration
object to its constructor. While creating the configuration instance of ``CartpoleSceneCfg``
we specify how many environment copies we want to create using the ``num_envs`` argument.
This will be used to clone the scene for each environment.
.. literalinclude:: ../../../../source/standalone/tutorials/02_scene/create_scene.py
:language: python
:start-at: # Design scene
:end-at: scene = InteractiveScene(scene_cfg)
Accessing scene elements
------------------------
Similar to how entities were accessed from a dictionary in the previous tutorials, the
scene elements can be accessed from the :class:`InteractiveScene` object using the
``[]`` operator. The operator takes in a string key and returns the corresponding
entity. The key is specified through the configuration class for each entity. For example,
the cartpole is specified using the key ``"cartpole"`` in the configuration class.
.. literalinclude:: ../../../../source/standalone/tutorials/02_scene/create_scene.py
:language: python
:start-at: # Extract scene entities
:end-at: robot = scene["cartpole"]
Running the simulation loop
---------------------------
The rest of the script looks similar to previous scripts that interfaced with :class:`assets.Articulation`,
with a few small differences in the methods called:
* :meth:`assets.Articulation.reset` ⟶ :meth:`scene.InteractiveScene.reset`
* :meth:`assets.Articulation.write_data_to_sim` ⟶ :meth:`scene.InteractiveScene.write_data_to_sim`
* :meth:`assets.Articulation.update` ⟶ :meth:`scene.InteractiveScene.update`
Under the hood, the methods of :class:`scene.InteractiveScene` call the corresponding
methods of the entities in the scene.
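Schematically, the loop body changes as follows (a sketch; ``scene`` is the
:class:`scene.InteractiveScene` instance created above and ``sim`` the running
simulation context):

.. code-block:: python

   import torch

   robot = scene["cartpole"]
   sim_dt = sim.get_physics_dt()
   while simulation_app.is_running():
       efforts = torch.randn_like(robot.data.joint_pos) * 5.0
       robot.set_joint_effort_target(efforts)
       # scene-level calls replace the per-asset calls from the previous tutorial
       scene.write_data_to_sim()
       sim.step()
       scene.update(sim_dt)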
The Code Execution
~~~~~~~~~~~~~~~~~~
Let's run the script to simulate 32 cartpoles in the scene. We can do this by passing
the ``--num_envs`` argument to the script.
.. code-block:: bash
./orbit.sh -p source/standalone/tutorials/02_scene/create_scene.py --num_envs 32
This should open a stage with 32 cartpoles swinging around randomly. You can use the
mouse to rotate the camera and the arrow keys to move around the scene.
In this tutorial, we saw how to use :class:`scene.InteractiveScene` to create a
scene with multiple assets. We also saw how to use the ``num_envs`` argument
to clone the scene for multiple environments.
There are many more example usages of the :class:`scene.InteractiveSceneCfg` in the tasks found
under the ``omni.isaac.orbit_tasks`` extension. Please check out the source code to see
how they are used for more complex scenes.
| 7,600 | reStructuredText | 45.919753 | 107 | 0.761974 |
NVIDIA-Omniverse/orbit/docs/source/tutorials/02_scene/index.rst | Creating a Scene
================
With the basic concepts of the framework covered, the tutorials move to a more intuitive scene
interface that uses the :class:`~omni.isaac.orbit.scene.InteractiveScene` class. This class
provides a higher level abstraction for creating scenes easily.
.. toctree::
:maxdepth: 1
:titlesonly:
create_scene
| 352 | reStructuredText | 26.153844 | 94 | 0.730114 |
NVIDIA-Omniverse/orbit/docs/source/tutorials/03_envs/run_rl_training.rst | Training with an RL Agent
=========================
.. currentmodule:: omni.isaac.orbit
In the previous tutorials, we covered how to define an RL task environment, register
it into the ``gym`` registry, and interact with it using a random agent. We now move
on to the next step: training an RL agent to solve the task.
Although the :class:`envs.RLTaskEnv` conforms to the :class:`gymnasium.Env` interface,
it is not exactly a ``gym`` environment. The inputs and outputs of the environment are
not numpy arrays, but torch tensors with the first dimension being the
number of environment instances.
Additionally, most RL libraries expect their own variation of an environment interface.
For example, `Stable-Baselines3`_ expects the environment to conform to its
`VecEnv API`_ which expects a list of numpy arrays instead of a single tensor. Similarly,
`RSL-RL`_ and `RL-Games`_ expect a different interface. Since there is no one-size-fits-all
solution, we do not base the :class:`envs.RLTaskEnv` on any particular learning library.
Instead, we implement wrappers to convert the environment into the expected interface.
These are specified in the :mod:`omni.isaac.orbit_tasks.utils.wrappers` module.
In this tutorial, we will use `Stable-Baselines3`_ to train an RL agent to solve the
cartpole balancing task.
.. caution::
   Wrapping the environment with the respective learning framework's wrapper should happen last,
   i.e. after all other wrappers have been applied. This is because the learning framework's wrapper
   modifies the interpretation of the environment's APIs, which may then no longer be compatible with
   :class:`gymnasium.Env`.
The Code
--------
For this tutorial, we use the training script from `Stable-Baselines3`_ workflow in the
``orbit/source/standalone/workflows/sb3`` directory.
.. dropdown:: Code for train.py
:icon: code
.. literalinclude:: ../../../../source/standalone/workflows/sb3/train.py
:language: python
:emphasize-lines: 58, 61, 67-69, 78, 92-96, 98-99, 102-110, 112, 117-125, 127-128, 135-138
:linenos:
The Code Explained
------------------
.. currentmodule:: omni.isaac.orbit_tasks.utils
Most of the code above is boilerplate code to create logging directories, saving the parsed configurations,
and setting up different Stable-Baselines3 components. For this tutorial, the important part is creating
the environment and wrapping it with the Stable-Baselines3 wrapper.
There are three wrappers used in the code above:
1. :class:`gymnasium.wrappers.RecordVideo`: This wrapper records a video of the environment
and saves it to the specified directory. This is useful for visualizing the agent's behavior
during training.
2. :class:`wrappers.sb3.Sb3VecEnvWrapper`: This wrapper converts the environment
into a Stable-Baselines3 compatible environment.
3. `stable_baselines3.common.vec_env.VecNormalize`_: This wrapper normalizes the
environment's observations and rewards.
Each of these wrappers wraps around the previous one, following ``env = wrapper(env, *args, **kwargs)``
repeatedly. The final environment is then used to train the agent. For more information on how these
wrappers work, please refer to the :ref:`how-to-env-wrappers` documentation.
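A condensed sketch of this wrapping chain is shown below. The import paths
follow the modules named above, the ``gym.make`` call follows Orbit's task
registration convention (with ``env_cfg`` being the parsed task configuration),
and the video settings are illustrative.

.. code-block:: python

   import gymnasium as gym
   from stable_baselines3.common.vec_env import VecNormalize

   from omni.isaac.orbit_tasks.utils.wrappers.sb3 import Sb3VecEnvWrapper

   # create the environment with off-screen rendering for video capture
   env = gym.make("Isaac-Cartpole-v0", cfg=env_cfg, render_mode="rgb_array")
   # 1. record videos of the agent's behavior every 2000 steps
   env = gym.wrappers.RecordVideo(
       env, video_folder="videos", step_trigger=lambda step: step % 2000 == 0
   )
   # 2. convert to the Stable-Baselines3 VecEnv interface
   env = Sb3VecEnvWrapper(env)
   # 3. normalize observations and rewards
   env = VecNormalize(env, norm_obs=True, norm_reward=True)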
The Code Execution
------------------
We train a PPO agent from Stable-Baselines3 to solve the cartpole balancing task.
Training the agent
~~~~~~~~~~~~~~~~~~
There are three main ways to train the agent. Each of them has its own advantages and disadvantages.
It is up to you to decide which one you prefer based on your use case.
Headless execution
""""""""""""""""""
If the ``--headless`` flag is set, the simulation is not rendered during training. This is useful
when training on a remote server or when you do not want to see the simulation. Typically, it speeds
up the training process since only the physics simulation step is performed.
.. code-block:: bash
./orbit.sh -p source/standalone/workflows/sb3/train.py --task Isaac-Cartpole-v0 --num_envs 64 --headless
Headless execution with off-screen render
"""""""""""""""""""""""""""""""""""""""""
Since the above command does not render the simulation, it is not possible to visualize the agent's
behavior during training. To visualize the agent's behavior, we pass the ``--offscreen_render`` flag, which
enables off-screen rendering. Additionally, we pass the ``--video`` flag, which records a video of the
agent's behavior during training.
.. code-block:: bash
./orbit.sh -p source/standalone/workflows/sb3/train.py --task Isaac-Cartpole-v0 --num_envs 64 --headless --offscreen_render --video
The videos are saved to the ``logs/sb3/Isaac-Cartpole-v0/<run-dir>/videos`` directory. You can open these videos
using any video player.
Interactive execution
"""""""""""""""""""""
.. currentmodule:: omni.isaac.orbit
While the above two methods are useful for training the agent, they don't allow you to interact with the
simulation to see what is happening. In this case, you can ignore the ``--headless`` flag and run the
training script as follows:
.. code-block:: bash
./orbit.sh -p source/standalone/workflows/sb3/train.py --task Isaac-Cartpole-v0 --num_envs 64
This will open the Isaac Sim window and you can see the agent training in the environment. However, this
will slow down the training process since the simulation is rendered on the screen. As a workaround, you
can switch between different render modes in the ``"Orbit"`` window that is docked on the bottom-right
corner of the screen. To learn more about these render modes, please check the
:class:`sim.SimulationContext.RenderMode` class.
Viewing the logs
~~~~~~~~~~~~~~~~
On a separate terminal, you can monitor the training progress by executing the following command:
.. code:: bash
# execute from the root directory of the repository
./orbit.sh -p -m tensorboard.main --logdir logs/sb3/Isaac-Cartpole-v0
Playing the trained agent
~~~~~~~~~~~~~~~~~~~~~~~~~
Once the training is complete, you can visualize the trained agent by executing the following command:
.. code:: bash
# execute from the root directory of the repository
./orbit.sh -p source/standalone/workflows/sb3/play.py --task Isaac-Cartpole-v0 --num_envs 32 --use_last_checkpoint
The above command will load the latest checkpoint from the ``logs/sb3/Isaac-Cartpole-v0``
directory. You can also specify a specific checkpoint by passing the ``--checkpoint`` flag.
.. _Stable-Baselines3: https://stable-baselines3.readthedocs.io/en/master/
.. _VecEnv API: https://stable-baselines3.readthedocs.io/en/master/guide/vec_envs.html#vecenv-api-vs-gym-api
.. _`stable_baselines3.common.vec_env.VecNormalize`: https://stable-baselines3.readthedocs.io/en/master/guide/vec_envs.html#vecnormalize
.. _RL-Games: https://github.com/Denys88/rl_games
.. _RSL-RL: https://github.com/leggedrobotics/rsl_rl
| 6,871 | reStructuredText | 43.623376 | 136 | 0.747781 |
NVIDIA-Omniverse/orbit/docs/source/tutorials/03_envs/create_rl_env.rst | .. _tutorial-create-rl-env:
Creating an RL Environment
==========================
.. currentmodule:: omni.isaac.orbit
Having learnt how to create a base environment in :ref:`tutorial-create-base-env`, we will now look at how to create a
task environment for reinforcement learning.
The base environment is designed as a sense-act environment where the agent can send commands to the environment
and receive observations from the environment. This minimal interface is sufficient for many applications such as
traditional motion planning and controls. However, many applications require a task-specification which often
serves as the learning objective for the agent. For instance, in a navigation task, the agent may be required to
reach a goal location. To this end, we use the :class:`envs.RLTaskEnv` class which extends the base environment
to include a task specification.
Similar to other components in Orbit, instead of directly modifying the base class :class:`RLTaskEnv`, we
encourage users to simply implement a configuration :class:`RLTaskEnvCfg` for their task environment.
This practice allows us to separate the task specification from the environment implementation, making it easier
to reuse components of the same environment for different tasks.
In this tutorial, we will configure the cartpole environment using the :class:`RLTaskEnvCfg` to create a task
for balancing the pole upright. We will learn how to specify the task using reward terms, termination criteria,
curriculum and commands.
The Code
~~~~~~~~
For this tutorial, we use the cartpole environment defined in ``omni.isaac.orbit_tasks.classic.cartpole`` module.
.. dropdown:: Code for cartpole_env_cfg.py
:icon: code
.. literalinclude:: ../../../../source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/cartpole/cartpole_env_cfg.py
:language: python
:emphasize-lines: 63-68, 124-149, 152-162, 165-169, 187-192
:linenos:
The script for running the environment ``run_cartpole_rl_env.py`` is present in the
``orbit/source/standalone/tutorials/03_envs`` directory. The script is similar to the
``cartpole_base_env.py`` script in the previous tutorial, except that it uses the
:class:`envs.RLTaskEnv` instead of the :class:`envs.BaseEnv`.
.. dropdown:: Code for run_cartpole_rl_env.py
:icon: code
.. literalinclude:: ../../../../source/standalone/tutorials/03_envs/run_cartpole_rl_env.py
:language: python
:emphasize-lines: 43-47, 61-62
:linenos:
The Code Explained
~~~~~~~~~~~~~~~~~~
We already went through parts of the above in the :ref:`tutorial-create-base-env` tutorial to learn
how to specify the scene, observations, actions and events. Thus, in this tutorial, we
will focus only on the RL components of the environment.
In Orbit, we provide various implementations of different terms in the :mod:`envs.mdp` module. We will use
some of these terms in this tutorial, but users are free to define their own terms as well. These
are usually placed in their task-specific sub-package
(for instance, in :mod:`omni.isaac.orbit_tasks.classic.cartpole.mdp`).
Defining rewards
----------------
The :class:`managers.RewardManager` is used to compute the reward terms for the agent. Similar to the other
managers, its terms are configured using the :class:`managers.RewardTermCfg` class. The
:class:`managers.RewardTermCfg` class specifies the function or callable class that computes the reward
as well as the weighting associated with it. It also takes in a dictionary of arguments, ``"params"``,
that are passed to the reward function when it is called.
For the cartpole task, we will use the following reward terms:
* **Alive Reward**: Encourage the agent to stay alive for as long as possible.
* **Terminating Reward**: Similarly penalize the agent for terminating.
* **Pole Angle Reward**: Encourage the agent to keep the pole at the desired upright position.
* **Cart Velocity Reward**: Encourage the agent to keep the cart velocity as small as possible.
* **Pole Velocity Reward**: Encourage the agent to keep the pole velocity as small as possible.
.. literalinclude:: ../../../../source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/cartpole/cartpole_env_cfg.py
:language: python
:pyobject: RewardsCfg
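For instance, a single term penalizing the pole's deviation from the upright
position might be written as follows. This is a sketch: the function comes from
the :mod:`envs.mdp` module, and the asset and joint names follow the cartpole
scene described earlier.

.. code-block:: python

   import omni.isaac.orbit.envs.mdp as mdp
   from omni.isaac.orbit.managers import RewardTermCfg as RewTerm
   from omni.isaac.orbit.managers import SceneEntityCfg

   # penalize the squared distance of the pole joint from the upright target (0 rad)
   pole_pos = RewTerm(
       func=mdp.joint_pos_target_l2,
       weight=-1.0,
       params={"asset_cfg": SceneEntityCfg("robot", joint_names=["cart_to_pole"]), "target": 0.0},
   )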
Defining termination criteria
-----------------------------
Most learning tasks happen over a finite number of steps that we call an episode. For instance, in the cartpole
task, we want the agent to balance the pole for as long as possible. However, if the agent reaches an unstable
or unsafe state, we want to terminate the episode. On the other hand, if the agent is able to balance the pole
for a long time, we want to terminate the episode and start a new one so that the agent can learn to balance the
pole from a different starting configuration.
The :class:`managers.TerminationsCfg` configures what causes an episode to terminate. In this example,
we want the task to terminate when either of the following conditions is met:
* **Episode Length** The episode length is greater than the defined ``max_episode_length``
* **Cart out of bounds** The cart goes outside of the bounds ``[-3, 3]``
The flag :attr:`managers.TerminationsCfg.time_out` specifies whether the term is a time-out (truncation) term
or terminated term. These are used to indicate the two types of terminations as described in `Gymnasium's documentation
<https://gymnasium.farama.org/tutorials/gymnasium_basics/handling_time_limits/>`_.
.. literalinclude:: ../../../../source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/cartpole/cartpole_env_cfg.py
:language: python
:pyobject: TerminationsCfg
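Spelled out, the two terms look roughly like this (a sketch; the function names
follow the :mod:`envs.mdp` module):

.. code-block:: python

   import omni.isaac.orbit.envs.mdp as mdp
   from omni.isaac.orbit.managers import SceneEntityCfg
   from omni.isaac.orbit.managers import TerminationTermCfg as DoneTerm

   # truncation: the episode ran out of time
   time_out = DoneTerm(func=mdp.time_out, time_out=True)
   # termination: the cart moved outside the [-3, 3] bounds on its rail
   cart_out_of_bounds = DoneTerm(
       func=mdp.joint_pos_out_of_manual_limit,
       params={"asset_cfg": SceneEntityCfg("robot", joint_names=["slider_to_cart"]), "bounds": (-3.0, 3.0)},
   )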
Defining commands
-----------------
For various goal-conditioned tasks, it is useful to specify the goals or commands for the agent. These are
handled through the :class:`managers.CommandManager`. The command manager handles resampling and updating the
commands at each step. It can also be used to provide the commands as an observation to the agent.
For this simple task, we do not use any commands. This is specified by using a command term with the
:class:`envs.mdp.NullCommandCfg` configuration. However, you can see an example of command definitions in the
locomotion or manipulation tasks.
.. literalinclude:: ../../../../source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/cartpole/cartpole_env_cfg.py
:language: python
:pyobject: CommandsCfg
Defining curriculum
-------------------
Oftentimes, when training a learning agent, it helps to start with a simple task and gradually increase the
task's difficulty as the agent training progresses. This is the idea behind curriculum learning. In Orbit,
we provide a :class:`managers.CurriculumManager` class that can be used to define a curriculum for your environment.
For simplicity, we don't implement a real curriculum in this tutorial and instead use a pass-through
curriculum that does not modify the environment. You can see examples of curriculum definitions in the
locomotion or manipulation tasks.
.. literalinclude:: ../../../../source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/cartpole/cartpole_env_cfg.py
:language: python
:pyobject: CurriculumCfg
Tying it all together
---------------------
With all the above components defined, we can now create the :class:`RLTaskEnvCfg` configuration for the
cartpole environment. This is similar to the :class:`BaseEnvCfg` defined in :ref:`tutorial-create-base-env`,
only with the added RL components explained in the above sections.
.. literalinclude:: ../../../../source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/cartpole/cartpole_env_cfg.py
:language: python
:pyobject: CartpoleEnvCfg
Running the simulation loop
---------------------------
Coming back to the ``run_cartpole_rl_env.py`` script, the simulation loop is similar to the previous tutorial.
The only difference is that we create an instance of :class:`envs.RLTaskEnv` instead of the
:class:`envs.BaseEnv`. Consequently, now the :meth:`envs.RLTaskEnv.step` method returns additional signals
such as the reward and termination status. The information dictionary also maintains logging of quantities
such as the reward contribution from individual terms, the termination status of each term, the episode length etc.
.. literalinclude:: ../../../../source/standalone/tutorials/03_envs/run_cartpole_rl_env.py
:language: python
:pyobject: main
The Code Execution
~~~~~~~~~~~~~~~~~~
Similar to the previous tutorial, we can run the environment by executing the ``run_cartpole_rl_env.py`` script.
.. code-block:: bash
./orbit.sh -p source/standalone/tutorials/03_envs/run_cartpole_rl_env.py --num_envs 32
This should open a similar simulation as in the previous tutorial. However, this time, the environment
returns more signals that specify the reward and termination status. Additionally, the individual
environments reset themselves when they terminate based on the termination criteria specified in the
configuration.
To stop the simulation, you can either close the window, or press ``Ctrl+C`` in the terminal
where you started the simulation.
In this tutorial, we learnt how to create a task environment for reinforcement learning. We do this
by extending the base environment to include the rewards, terminations, commands and curriculum terms.
We also learnt how to use the :class:`envs.RLTaskEnv` class to run the environment and receive various
signals from it.
While it is possible to manually create an instance of :class:`envs.RLTaskEnv` class for a desired task,
this is not scalable as it requires specialized scripts for each task. Thus, we exploit the
:meth:`gymnasium.make` function to create the environment with the gym interface. We will learn how to do this
in the next tutorial.
| 9,925 | reStructuredText | 49.902564 | 135 | 0.764131 |
NVIDIA-Omniverse/orbit/docs/source/tutorials/03_envs/create_base_env.rst | .. _tutorial-create-base-env:
Creating a Base Environment
===========================
.. currentmodule:: omni.isaac.orbit
Environments bring together different aspects of the simulation such as
the scene, observations and actions spaces, reset events etc. to create a
coherent interface for various applications. In Orbit, environments are
implemented as :class:`envs.BaseEnv` and :class:`envs.RLTaskEnv` classes.
The two classes are very similar, but :class:`envs.RLTaskEnv` is useful for
reinforcement learning tasks and contains rewards, terminations, curriculum
and command generation. The :class:`envs.BaseEnv` class is useful for
traditional robot control and doesn't contain rewards and terminations.
In this tutorial, we will look at the base class :class:`envs.BaseEnv` and its
corresponding configuration class :class:`envs.BaseEnvCfg`. We will use the
cartpole environment from earlier to illustrate the different components
in creating a new :class:`envs.BaseEnv` environment.
The Code
~~~~~~~~
The tutorial corresponds to the ``create_cartpole_base_env`` script in the ``orbit/source/standalone/tutorials/03_envs``
directory.
.. dropdown:: Code for create_cartpole_base_env.py
:icon: code
.. literalinclude:: ../../../../source/standalone/tutorials/03_envs/create_cartpole_base_env.py
:language: python
:emphasize-lines: 49-53, 56-73, 76-109, 112-131, 136-140, 145, 149, 154-155, 161-162
:linenos:
The Code Explained
~~~~~~~~~~~~~~~~~~
The base class :class:`envs.BaseEnv` wraps around many intricacies of the simulation interaction
and provides a simple interface for the user to run the simulation and interact with it. It
is composed of the following components:
* :class:`scene.InteractiveScene` - The scene that is used for the simulation.
* :class:`managers.ActionManager` - The manager that handles actions.
* :class:`managers.ObservationManager` - The manager that handles observations.
* :class:`managers.EventManager` - The manager that schedules operations (such as domain randomization)
at specified simulation events. For instance, at startup, on resets, or periodic intervals.
By configuring these components, the user can create different variations of the same environment
with minimal effort. In this tutorial, we will go through the different components of the
:class:`envs.BaseEnv` class and how to configure them to create a new environment.
Designing the scene
-------------------
The first step in creating a new environment is to configure its scene. For the cartpole
environment, we will be using the scene from the previous tutorial. Thus, we omit the
scene configuration here. For more details on how to configure a scene, see
:ref:`tutorial-interactive-scene`.
Defining actions
----------------
In the previous tutorial, we directly input the action to the cartpole using
the :meth:`assets.Articulation.set_joint_effort_target` method. In this tutorial, we will
use the :class:`managers.ActionManager` to handle the actions.
The action manager can comprise multiple :class:`managers.ActionTerm` instances. Each action term
is responsible for applying *control* over a specific aspect of the environment. For instance,
for a robotic arm, we can have two action terms -- one for controlling the joints of the arm,
and the other for controlling the gripper. This composition allows the user to define
different control schemes for different aspects of the environment.
In the cartpole environment, we want to control the force applied to the cart to balance the pole.
Thus, we will create an action term that controls the force applied to the cart.
.. literalinclude:: ../../../../source/standalone/tutorials/03_envs/create_cartpole_base_env.py
:language: python
:pyobject: ActionsCfg
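In code, such a term looks roughly as follows (a sketch mirroring the included
configuration; the effort scale is illustrative):

.. code-block:: python

   import omni.isaac.orbit.envs.mdp as mdp
   from omni.isaac.orbit.utils import configclass

   @configclass
   class ActionsCfg:
       """Action specifications for the cartpole environment."""

       # a single action term: joint effort applied to the cart's slider joint
       joint_efforts = mdp.JointEffortActionCfg(
           asset_name="robot", joint_names=["slider_to_cart"], scale=5.0
       )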
Defining observations
---------------------
While the scene defines the state of the environment, the observations define the states
that are observable by the agent. These observations are used by the agent to make decisions
on what actions to take. In Orbit, the observations are computed by the
:class:`managers.ObservationManager` class.
Similar to the action manager, the observation manager can comprise multiple observation terms.
These are further grouped into observation groups, which are used to define different observation
spaces for the environment. For instance, for hierarchical control, we may want to define
two observation groups -- one for the low-level controller and the other for the high-level
controller. It is assumed that all the observation terms in a group have the same dimensions.
For this tutorial, we will only define one observation group named ``"policy"``. While the group
name is otherwise arbitrary, this particular group is required by various wrappers in Orbit.
We define a group by inheriting from the :class:`managers.ObservationGroupCfg` class. This class
collects different observation terms and helps define common properties for the group, such
as enabling noise corruption or concatenating the observations into a single tensor.
The individual terms are defined by inheriting from the :class:`managers.ObservationTermCfg` class.
This class takes in the :attr:`managers.ObservationTermCfg.func` that specifies the function or
callable class that computes the observation for that term. It includes other parameters for
defining the noise model, clipping, scaling, etc. However, we leave these parameters to their
default values for this tutorial.
.. literalinclude:: ../../../../source/standalone/tutorials/03_envs/create_cartpole_base_env.py
:language: python
:pyobject: ObservationsCfg
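As a rough sketch of how groups and terms fit together (the term functions are drawn from the
:mod:`envs.mdp` module; the exact choice of terms here is an assumption):

.. code-block:: python

   from omni.isaac.orbit.envs import mdp
   from omni.isaac.orbit.managers import ObservationGroupCfg as ObsGroup
   from omni.isaac.orbit.managers import ObservationTermCfg as ObsTerm
   from omni.isaac.orbit.utils import configclass


   @configclass
   class ObservationsCfg:
       """Observation specifications for the cartpole environment."""

       @configclass
       class PolicyCfg(ObsGroup):
           """Observations for the policy group."""

           # each term wraps a function that computes one observation
           joint_pos_rel = ObsTerm(func=mdp.joint_pos_rel)
           joint_vel_rel = ObsTerm(func=mdp.joint_vel_rel)

           def __post_init__(self) -> None:
               # group-wide properties: no noise corruption, stack terms into one tensor
               self.enable_corruption = False
               self.concatenate_terms = True

       # the group name "policy" is the one expected by various wrappers
       policy: PolicyCfg = PolicyCfg()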
Defining events
---------------
At this point, we have defined the scene, actions and observations for the cartpole environment.
The general idea for all these components is to define the configuration classes and then
pass them to the corresponding managers. The event manager is no different.
The :class:`managers.EventManager` class is responsible for events corresponding to changes
in the simulation state. This includes resetting (or randomizing) the scene, randomizing physical
properties (such as mass, friction, etc.), and varying visual properties (such as colors, textures, etc.).
Each of these is specified through the :class:`managers.EventTermCfg` class, which
takes in the :attr:`managers.EventTermCfg.func` that specifies the function or callable
class that performs the event.
Additionally, it expects the **mode** of the event. The mode specifies when the event term should be applied.
It is possible to specify your own mode. For this, you'll need to adapt the :class:`~envs.BaseEnv` class.
However, out of the box, Orbit provides three commonly used modes:
* ``"startup"`` - Event that takes place only once at environment startup.
* ``"reset"`` - Event that occurs on environment termination and reset.
* ``"interval"`` - Event that are executed at a given interval, i.e., periodically after a certain number of steps.
For this example, we define events that randomize the pole's mass on startup. This is done only once since this
operation is expensive and we don't want to do it on every reset. We also create an event to randomize the initial
joint state of the cartpole and the pole at every reset.
.. literalinclude:: ../../../../source/standalone/tutorials/03_envs/create_cartpole_base_env.py
:language: python
:pyobject: EventCfg
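The included configuration follows the pattern sketched below. This is a paraphrase, not the
verbatim tutorial file; the event function names and randomization ranges are assumptions based
on the description above:

.. code-block:: python

   from omni.isaac.orbit.envs import mdp
   from omni.isaac.orbit.managers import EventTermCfg as EventTerm
   from omni.isaac.orbit.managers import SceneEntityCfg
   from omni.isaac.orbit.utils import configclass


   @configclass
   class EventCfg:
       """Configuration for environment events."""

       # "startup" mode: randomize the pole's mass only once, since it is expensive
       add_pole_mass = EventTerm(
           func=mdp.add_body_mass,
           mode="startup",
           params={
               "asset_cfg": SceneEntityCfg("robot", body_names=["pole"]),
               "mass_range": (0.1, 0.5),  # assumed range
           },
       )

       # "reset" mode: randomize the cart's joint state on every reset
       reset_cart_position = EventTerm(
           func=mdp.reset_joints_by_offset,
           mode="reset",
           params={
               "asset_cfg": SceneEntityCfg("robot", joint_names=["slider_to_cart"]),
               "position_range": (-1.0, 1.0),   # assumed range
               "velocity_range": (-0.1, 0.1),   # assumed range
           },
       )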
Tying it all together
---------------------
Having defined the scene and manager configurations, we can now define the environment configuration
through the :class:`envs.BaseEnvCfg` class. This class takes in the scene, action, observation and
event configurations.
In addition to these, it also takes in the :attr:`envs.BaseEnvCfg.sim` attribute, which defines the
simulation parameters such as the timestep, gravity, etc. This is initialized to default values, but
can be modified as needed. We recommend doing so by defining the :meth:`__post_init__` method in your
subclass of :class:`envs.BaseEnvCfg`, which is called after the configuration is initialized.
.. literalinclude:: ../../../../source/standalone/tutorials/03_envs/create_cartpole_base_env.py
:language: python
:pyobject: CartpoleEnvCfg
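Putting the pieces together, the environment configuration might look like the following sketch.
The scene class comes from the earlier interactive-scene tutorial, and the specific viewer,
decimation, and timestep values are assumptions:

.. code-block:: python

   from omni.isaac.orbit.envs import BaseEnvCfg
   from omni.isaac.orbit.utils import configclass


   @configclass
   class CartpoleEnvCfg(BaseEnvCfg):
       """Configuration for the cartpole base environment."""

       # scene settings (CartpoleSceneCfg is defined in the interactive-scene tutorial)
       scene = CartpoleSceneCfg(num_envs=1024, env_spacing=2.5)
       # manager settings
       observations = ObservationsCfg()
       actions = ActionsCfg()
       events = EventCfg()

       def __post_init__(self):
           """Post-initialization: override the default simulation settings."""
           self.viewer.eye = [4.5, 0.0, 6.0]  # camera position (assumed)
           self.decimation = 4                # physics steps per environment step
           self.sim.dt = 0.005                # physics timestep of 5 ms (assumed)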
Running the simulation
----------------------
Lastly, we revisit the simulation execution loop. This is now much simpler since we have
abstracted away most of the details into the environment configuration. We only need to
call the :meth:`envs.BaseEnv.reset` method to reset the environment and the :meth:`envs.BaseEnv.step`
method to step it. Both of these methods return the observations and an info dictionary,
which may contain additional information provided by the environment. These can be used by an
agent for decision-making.
The :class:`envs.BaseEnv` class does not have any notion of terminations since that concept is
specific to episodic tasks. Thus, the user is responsible for defining the termination condition
for the environment. In this tutorial, we reset the simulation at regular intervals.
.. literalinclude:: ../../../../source/standalone/tutorials/03_envs/create_cartpole_base_env.py
:language: python
:pyobject: main
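A condensed sketch of such a loop is shown below. The reset interval and the use of random
actions mirror the tutorial's behavior, while ``simulation_app`` is assumed to come from the
standard :class:`app.AppLauncher` boilerplate at the top of the script:

.. code-block:: python

   import torch

   from omni.isaac.orbit.envs import BaseEnv


   def main():
       """Run random actions in the cartpole base environment."""
       # create the environment from its configuration
       env_cfg = CartpoleEnvCfg()
       env = BaseEnv(cfg=env_cfg)

       count = 0
       # disable autograd bookkeeping for the entire loop (see the note below)
       with torch.inference_mode():
           while simulation_app.is_running():
               # user-defined "termination": reset the environment periodically
               if count % 300 == 0:
                   env.reset()
               # sample random efforts and step the environment
               actions = torch.randn_like(env.action_manager.action)
               obs, extras = env.step(actions)
               count += 1
       # cleanly close the environment
       env.close()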
An important thing to note above is that the entire simulation loop is wrapped inside the
:meth:`torch.inference_mode` context manager. This is because the environment uses PyTorch
operations under the hood, and we want to ensure that the simulation is not slowed down by
the overhead of PyTorch's autograd engine, i.e., that no gradients are computed for the
simulation operations.
The Code Execution
~~~~~~~~~~~~~~~~~~
To run the base environment made in this tutorial, you can use the following command:
.. code-block:: bash
./orbit.sh -p source/standalone/tutorials/03_envs/create_cartpole_base_env.py --num_envs 32
This should open a stage with a ground plane, light source, and cartpoles. The simulation should be
playing with random actions on the cartpole. Additionally, it opens a UI window in the bottom-right
corner of the screen named ``"Orbit"``. This window contains different UI elements that
can be used for debugging and visualization.
To stop the simulation, you can either close the window, or press ``Ctrl+C`` in the terminal where you
started the simulation.
In this tutorial, we learned about the different managers that help define a base environment. We
include more examples of defining the base environment in the ``orbit/source/standalone/tutorials/03_envs``
directory. For completeness, they can be run using the following commands:
.. code-block:: bash
# Floating cube environment with custom action term for PD control
./orbit.sh -p source/standalone/tutorials/03_envs/create_cube_base_env.py --num_envs 32
# Quadrupedal locomotion environment with a policy that interacts with the environment
./orbit.sh -p source/standalone/tutorials/03_envs/create_quadruped_base_env.py --num_envs 32
In the following tutorial, we will look at the :class:`envs.RLTaskEnv` class and how to use it
to create a Markovian Decision Process (MDP).
Registering an Environment
==========================
.. currentmodule:: omni.isaac.orbit
In the previous tutorial, we learned how to create a custom cartpole environment. We manually
created an instance of the environment by importing the environment class and its configuration
class.
.. dropdown:: Environment creation in the previous tutorial
:icon: code
.. literalinclude:: ../../../../source/standalone/tutorials/03_envs/run_cartpole_rl_env.py
:language: python
:start-at: # create environment configuration
:end-at: env = RLTaskEnv(cfg=env_cfg)
While straightforward, this approach does not scale to a large suite of environments.
In this tutorial, we will show how to use the :meth:`gymnasium.register` method to register
environments with the ``gymnasium`` registry. This allows us to create the environment through
the :meth:`gymnasium.make` function.
.. dropdown:: Environment creation in this tutorial
:icon: code
.. literalinclude:: ../../../../source/standalone/environments/random_agent.py
:language: python
:lines: 40-51
The Code
~~~~~~~~
The tutorial corresponds to the ``random_agent.py`` script in the ``orbit/source/standalone/environments`` directory.
.. dropdown:: Code for random_agent.py
:icon: code
.. literalinclude:: ../../../../source/standalone/environments/random_agent.py
:language: python
:emphasize-lines: 39-41, 46-51
:linenos:
The Code Explained
~~~~~~~~~~~~~~~~~~
The :class:`envs.RLTaskEnv` class inherits from the :class:`gymnasium.Env` class to follow
a standard interface. However, unlike the traditional Gym environments, the :class:`envs.RLTaskEnv`
implements a *vectorized* environment. This means that multiple environment instances
are running simultaneously in the same process, and all the data is returned in a batched
fashion.
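As a rough illustration of this batched interface (the shapes assume 32 parallel cartpole
instances and are for illustration only):

.. code-block:: python

   import torch

   # `env` is assumed to be an RLTaskEnv wrapping 32 parallel instances
   obs, extras = env.reset()
   # observations come back grouped and batched: one row per environment instance
   print(obs["policy"].shape)  # e.g. torch.Size([32, 4])

   # actions are batched the same way: one action vector per instance
   actions = torch.zeros(env.num_envs, env.action_manager.total_action_dim, device=env.device)
   obs, rew, terminated, truncated, extras = env.step(actions)
   print(rew.shape)  # torch.Size([32]) -- one reward per environment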
Using the gym registry
----------------------
To register an environment, we use the :meth:`gymnasium.register` method. This method takes
in the environment name, the entry point to the environment class, and the entry point to the
environment configuration class. For the cartpole environment, the following shows the registration
call in the ``omni.isaac.orbit_tasks.classic.cartpole`` sub-package:
.. literalinclude:: ../../../../source/extensions/omni.isaac.orbit_tasks/omni/isaac/orbit_tasks/classic/cartpole/__init__.py
:language: python
:lines: 10-
:emphasize-lines: 11, 12, 15
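In condensed form, the registration call follows the pattern below. The relative import of the
configuration module is a hypothetical stand-in for the actual sub-package layout:

.. code-block:: python

   import gymnasium as gym

   from . import cartpole_env_cfg  # hypothetical import of the config module

   gym.register(
       id="Isaac-Cartpole-v0",
       entry_point="omni.isaac.orbit.envs:RLTaskEnv",
       disable_env_checker=True,
       kwargs={
           # default configuration: a python class here, but a YAML path also works
           "env_cfg_entry_point": cartpole_env_cfg.CartpoleEnvCfg,
       },
   )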
The ``id`` argument is the name of the environment. As a convention, we name all the environments
with the prefix ``Isaac-`` to make it easier to search for them in the registry. The prefix is
typically followed by the name of the task and then the name of the robot.
For instance, for legged locomotion with ANYmal C on flat terrain, the environment is called
``Isaac-Velocity-Flat-Anymal-C-v0``. The version suffix ``v<N>`` is typically used to specify
different variations of the same environment; encoding variations this way keeps the environment
names from becoming too long and difficult to read.
The ``entry_point`` argument is the entry point to the environment class. The entry point is a string
of the form ``<module>:<class>``. In the case of the cartpole environment, the entry point is
``omni.isaac.orbit.envs:RLTaskEnv``. The entry point is used to import the environment class
when creating the environment instance.
The ``env_cfg_entry_point`` argument specifies the default configuration for the environment. The default
configuration is loaded using the :meth:`omni.isaac.orbit_tasks.utils.parse_env_cfg` function.
It is then passed to the :meth:`gymnasium.make` function to create the environment instance.
The configuration entry point can be either a YAML file or a Python configuration class.
.. note::
The ``gymnasium`` registry is a global registry. Hence, it is important to ensure that the
environment names are unique. Otherwise, the registry will throw an error when registering
the environment.
Creating the environment
------------------------
To inform the ``gym`` registry of all the environments provided by the ``omni.isaac.orbit_tasks``
extension, we must import the module at the start of the script. This executes the ``__init__.py``
file, which iterates over all the sub-packages and registers their respective environments.
.. literalinclude:: ../../../../source/standalone/environments/random_agent.py
:language: python
:start-at: import omni.isaac.orbit_tasks # noqa: F401
:end-at: import omni.isaac.orbit_tasks # noqa: F401
In this tutorial, the task name is read from the command line. The task name is used to parse
the default configuration as well as to create the environment instance. In addition, other
parsed command-line arguments, such as the number of environments, the simulation device,
and whether to render, are used to override the default configuration.
.. literalinclude:: ../../../../source/standalone/environments/random_agent.py
:language: python
:start-at: # create environment configuration
:end-at: env = gym.make(args_cli.task, cfg=env_cfg)
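In condensed form, this part of the script follows the pattern below; the exact keyword
arguments of :meth:`~omni.isaac.orbit_tasks.utils.parse_env_cfg` are assumptions:

.. code-block:: python

   import gymnasium as gym

   from omni.isaac.orbit_tasks.utils import parse_env_cfg

   # load the default configuration registered for the task and apply CLI overrides
   env_cfg = parse_env_cfg(
       args_cli.task,             # e.g. "Isaac-Cartpole-v0"
       use_gpu=not args_cli.cpu,  # assumed flag mapping
       num_envs=args_cli.num_envs,
   )
   # create the environment instance through the gymnasium registry
   env = gym.make(args_cli.task, cfg=env_cfg)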
Once the environment is created, the rest of the execution follows the standard reset-and-step loop.
The Code Execution
~~~~~~~~~~~~~~~~~~
Now that we have gone through the code, let's run the script and see the result:
.. code-block:: bash
./orbit.sh -p source/standalone/environments/random_agent.py --task Isaac-Cartpole-v0 --num_envs 32
This should open a stage with everything similar to the previous :ref:`tutorial-create-rl-env` tutorial.
To stop the simulation, you can either close the window, or press ``Ctrl+C`` in the terminal.
You can also change the simulation device from GPU to CPU by adding the ``--cpu`` flag:
.. code-block:: bash
./orbit.sh -p source/standalone/environments/random_agent.py --task Isaac-Cartpole-v0 --num_envs 32 --cpu
With the ``--cpu`` flag, the simulation will run on the CPU. This is useful for debugging the simulation.
However, the simulation will run much slower than on the GPU.
Designing an Environment
========================
The following tutorials introduce the concept of environments: :class:`~omni.isaac.orbit.envs.BaseEnv`
and its derivative :class:`~omni.isaac.orbit.envs.RLTaskEnv`. These environments bring together
different aspects of the framework to create a simulation environment for agent interaction.
.. toctree::
:maxdepth: 1
:titlesonly:
create_base_env
create_rl_env
register_rl_env_gym
run_rl_training