code | docstring | func_name | language | repo | path | url | license
---|---|---|---|---|---|---|---|
def reward_track_body_position_extended(
self,
body_state: BodyState,
ref_motion_state: ReferenceMotionState,
**kwargs,
) -> torch.Tensor:
"""
Computes a reward based on the difference between the body's extended position and the reference motion's
extended position.
This function is rewritten from _reward_teleop_body_position_extend of legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed reward for each environment.
"""
body_pos_extend = body_state.body_pos_extend
ref_body_pos_extend = ref_motion_state.body_pos_extend
diff_global_body_pos = ref_body_pos_extend - body_pos_extend
diff_global_body_pos_lower = diff_global_body_pos[:, :11]
diff_global_body_pos_upper = diff_global_body_pos[:, 11:]
diff_body_pos_dist_lower = (diff_global_body_pos_lower**2).mean(dim=-1).mean(dim=-1)
diff_body_pos_dist_upper = (diff_global_body_pos_upper**2).mean(dim=-1).mean(dim=-1)
r_body_pos_lower = torch.exp(-diff_body_pos_dist_lower / self._cfg.body_pos_lower_body_sigma)
r_body_pos_upper = torch.exp(-diff_body_pos_dist_upper / self._cfg.body_pos_upper_body_sigma)
return (
r_body_pos_lower * self._cfg.body_pos_lower_body_weight
+ r_body_pos_upper * self._cfg.body_pos_upper_body_weight
)
|
Computes a reward based on the difference between the body's extended position and the reference motion's
extended position.
This function is rewritten from _reward_teleop_body_position_extend of legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed reward for each environment.
|
reward_track_body_position_extended
|
python
|
NVlabs/HOVER
|
neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
Apache-2.0
|
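The reward above is an exponential kernel over the mean squared position error, computed separately for the lower-body and upper-body links and then weight-summed. Below is a minimal, self-contained sketch of that kernel; the tensor shapes and the sigma/weight values are illustrative placeholders, not the repository's configured values.

```python
import torch

# Minimal sketch of the exponential position-tracking kernel shown above.
# Shapes, sigmas, and weights are placeholders.
num_envs, num_bodies = 4, 22
body_pos = torch.randn(num_envs, num_bodies, 3)
ref_pos = body_pos + 0.01 * torch.randn(num_envs, num_bodies, 3)

diff = ref_pos - body_pos                         # (num_envs, num_bodies, 3)
lower, upper = diff[:, :11], diff[:, 11:]         # lower-body vs. upper-body links
d_lower = (lower**2).mean(dim=-1).mean(dim=-1)    # mean over xyz, then over links
d_upper = (upper**2).mean(dim=-1).mean(dim=-1)

sigma_lower, sigma_upper = 0.5, 0.03              # placeholder sigmas
w_lower, w_upper = 0.5, 1.0                       # placeholder weights
reward = w_lower * torch.exp(-d_lower / sigma_lower) + w_upper * torch.exp(-d_upper / sigma_upper)
print(reward.shape)  # torch.Size([4])
```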
def reward_track_body_position_vr_key_points(
self,
body_state: BodyState,
ref_motion_state: ReferenceMotionState,
**kwargs,
) -> torch.Tensor:
"""
Computes a reward based on the difference between selected key points of the body's extended position
and the reference motion's extended position.
This function is rewritten from _reward_teleop_body_position_vr_3keypoints of legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed reward for each environment.
"""
body_pos_extend = body_state.body_pos_extend
ref_body_pos_extend = ref_motion_state.body_pos_extend
diff_global_body_pos = ref_body_pos_extend - body_pos_extend
diff_global_body_pos_vr_key_points = diff_global_body_pos[:, -3:]
diff_body_pos_dist_vr_key_points = (diff_global_body_pos_vr_key_points**2).mean(dim=-1).mean(dim=-1)
return torch.exp(-diff_body_pos_dist_vr_key_points / self._cfg.body_pos_vr_key_points_sigma)
|
Computes a reward based on the difference between selected key points of the body's extended position
and the reference motion's extended position.
This function is rewritten from _reward_teleop_body_position_vr_3keypoints of legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed reward for each environment.
|
reward_track_body_position_vr_key_points
|
python
|
NVlabs/HOVER
|
neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
Apache-2.0
|
def reward_track_body_rotation(
self,
body_state: BodyState,
ref_motion_state: ReferenceMotionState,
**kwargs,
) -> torch.Tensor:
"""
Computes a reward based on the difference between the body's rotation and the reference motion's rotation.
This function is rewritten from _reward_teleop_body_rotation of legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed reward for each environment.
"""
body_rot = body_state.body_rot
ref_body_rot = ref_motion_state.body_rot
diff_global_body_rot = math_utils.quat_mul(ref_body_rot, math_utils.quat_conjugate(body_rot))
diff_global_body_rot_xyzw = math_utils.convert_quat(diff_global_body_rot, to="xyzw")
diff_global_body_angle = torch_utils.quat_to_angle_axis(diff_global_body_rot_xyzw)[0]
diff_global_body_angle_dist = (diff_global_body_angle**2).mean(dim=-1)
return torch.exp(-diff_global_body_angle_dist / self._cfg.body_rot_sigma)
|
Computes a reward based on the difference between the body's rotation and the reference motion's rotation.
This function is rewritten from _reward_teleop_body_rotation of legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed reward for each environment.
|
reward_track_body_rotation
|
python
|
NVlabs/HOVER
|
neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
Apache-2.0
|
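The rotation reward follows the same pattern: the quaternion difference between reference and measured body rotations is converted to a rotation angle, and the mean squared angle feeds an exponential kernel. The sketch below uses small stand-in quaternion helpers in (w, x, y, z) order (they are not the repository's math_utils / torch_utils implementations) and a placeholder sigma.

```python
import torch

def quat_conjugate(q):
    # Negate the vector part of a (w, x, y, z) quaternion.
    return torch.cat([q[..., :1], -q[..., 1:]], dim=-1)

def quat_mul(a, b):
    # Hamilton product of two (w, x, y, z) quaternions.
    aw, ax, ay, az = a.unbind(-1)
    bw, bx, by, bz = b.unbind(-1)
    return torch.stack([
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    ], dim=-1)

def quat_angle(q):
    # Rotation angle of a unit quaternion: 2 * atan2(|xyz|, |w|).
    return 2.0 * torch.atan2(q[..., 1:].norm(dim=-1), q[..., 0].abs())

num_envs, num_bodies = 2, 5
body_rot = torch.nn.functional.normalize(torch.randn(num_envs, num_bodies, 4), dim=-1)
ref_rot = torch.nn.functional.normalize(torch.randn(num_envs, num_bodies, 4), dim=-1)

diff = quat_mul(ref_rot, quat_conjugate(body_rot))
angle_dist = (quat_angle(diff) ** 2).mean(dim=-1)   # mean squared angle over bodies
sigma = 0.1                                          # placeholder sigma
reward = torch.exp(-angle_dist / sigma)
print(reward.shape)  # torch.Size([2])
```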
def penalize_torques(
self,
articulation_data: ArticulationData,
**kwargs,
) -> torch.Tensor:
"""
Computes the penalty on applied torques to minimize energy consumption.
This function is adapted from _reward_torques in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
"""
return torch.sum(torch.square(articulation_data.applied_torque), dim=1)
|
Computes the penalty on applied torques to minimize energy consumption.
This function is adapted from _reward_torques in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
|
penalize_torques
|
python
|
NVlabs/HOVER
|
neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
Apache-2.0
|
def penalize_joint_accelerations(
self,
articulation_data: ArticulationData,
**kwargs,
) -> torch.Tensor:
"""
Computes the penalty on joint acceleration of each motor.
This function is adapted from _reward_dof_acc in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
"""
return torch.sum(torch.square(articulation_data.joint_acc), dim=1)
|
Computes the penalty on joint acceleration of each motor.
This function is adapted from _reward_dof_acc in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
|
penalize_joint_accelerations
|
python
|
NVlabs/HOVER
|
neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
Apache-2.0
|
def penalize_joint_velocities(
self,
articulation_data: ArticulationData,
**kwargs,
) -> torch.Tensor:
"""
Computes the penalty on joint velocity of each motor.
This function is adapted from _reward_dof_vel in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
"""
return torch.sum(torch.square(articulation_data.joint_vel), dim=1)
|
Computes the penalty on joint velocity of each motor.
This function is adapted from _reward_dof_vel in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
|
penalize_joint_velocities
|
python
|
NVlabs/HOVER
|
neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
Apache-2.0
|
def penalize_lower_body_action_changes(
self,
previous_actions: torch.Tensor,
actions: torch.Tensor,
**kwargs,
) -> torch.Tensor:
"""
Computes the penalty for action changes in the lower body.
This function is adapted from _reward_lower_action_rate in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
"""
# Joints 0 - 10 are lower body joints in Isaac Gym.
return torch.sum(torch.square(previous_actions[:, :11] - actions[:, :11]), dim=1)
|
Computes the penalty for action changes in the lower body.
This function is adapted from _reward_lower_action_rate in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
|
penalize_lower_body_action_changes
|
python
|
NVlabs/HOVER
|
neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
Apache-2.0
|
def penalize_upper_body_action_changes(
self,
previous_actions: torch.Tensor,
actions: torch.Tensor,
**kwargs,
) -> torch.Tensor:
"""
Computes the penalty for action changes in the upper body.
This function is adapted from _reward_upper_action_rate in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
"""
# Joints 11 - 19 are upper body joints in Isaac Gym.
return torch.sum(torch.square(previous_actions[:, 11:] - actions[:, 11:]), dim=1)
|
Computes the penalty for action changes in the upper body.
This function is adapted from _reward_upper_action_rate in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
|
penalize_upper_body_action_changes
|
python
|
NVlabs/HOVER
|
neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
Apache-2.0
|
def penalize_early_termination(self, reset_buf: torch.Tensor, timeout_buf: torch.Tensor, **kwargs):
"""
Computes the penalty for episodes that terminate before timeout.
This function is adapted from `_reward_termination` in `legged_gym`.
Returns:
torch.Tensor: A tensor of shape (num_envs) representing the computed penalty for each environment.
"""
# Terminal reward / penalty
return (reset_buf * ~timeout_buf).float()
|
Computes the penalty for episodes that terminate before timeout.
This function is adapted from `_reward_termination` in `legged_gym`.
Returns:
torch.Tensor: A tensor of shape (num_envs) representing the computed penalty for each environment.
|
penalize_early_termination
|
python
|
NVlabs/HOVER
|
neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
Apache-2.0
|
def penalize_stumble(self, **kwargs):
"""
Computes the penalty for stumbling.
This function is adapted from _reward_stumble in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
"""
feet_contact_forces = self._get_feet_contact_forces()
return torch.any(
torch.norm(feet_contact_forces[:, :, :2], dim=2) > 5 * torch.abs(feet_contact_forces[:, :, 2]), dim=1
).float()
|
Computes the penalty for stumbling.
This function is adapted from _reward_stumble in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
|
penalize_stumble
|
python
|
NVlabs/HOVER
|
neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
Apache-2.0
|
def penalize_slippage(self, body_state: BodyState, **kwargs):
"""
Computes the penalty for slippage.
This function is adapted from _reward_slippage in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
"""
feet_vel = body_state.body_lin_vel[:, self._body_state_feet_ids]
feet_contact_forces = self._get_feet_contact_forces()
return torch.sum(torch.norm(feet_vel, dim=-1) * (torch.norm(feet_contact_forces, dim=-1) > 1.0), dim=1)
|
Computes the penalty for slippage.
This function is adapted from _reward_slippage in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
|
penalize_slippage
|
python
|
NVlabs/HOVER
|
neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
Apache-2.0
|
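A standalone sketch of the slippage term with made-up numbers: foot speed is penalized only while the foot is in contact, where contact is detected as a contact-force norm above 1 N.

```python
import torch

# One environment, two feet: the left foot slides while loaded, the right foot is in the air.
feet_vel = torch.tensor([[[0.30, 0.00, 0.00],     # left foot sliding at 0.3 m/s
                          [0.00, 0.00, 0.00]]])   # right foot still
feet_forces = torch.tensor([[[0.0, 0.0, 200.0],   # left foot in contact
                             [0.0, 0.0, 0.0]]])   # right foot in the air

in_contact = torch.norm(feet_forces, dim=-1) > 1.0
penalty = torch.sum(torch.norm(feet_vel, dim=-1) * in_contact, dim=1)
print(penalty)  # tensor([0.3000])
```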
def penalize_feet_orientation(self, body_state: BodyState, **kwargs):
"""
Computes the penalty on feet orientation so that the gravity vector projected into each foot frame has no x or y component.
This function is adapted from _reward_feet_ori in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
"""
left_quat = body_state.body_rot[:, self._body_state_feet_ids[0]]
left_gravity = math_utils.quat_rotate_inverse(left_quat, self._gravity_vec)
right_quat = body_state.body_rot[:, self._body_state_feet_ids[1]]
right_gravity = math_utils.quat_rotate_inverse(right_quat, self._gravity_vec)
return (
torch.sum(torch.square(left_gravity[:, :2]), dim=1) ** 0.5
+ torch.sum(torch.square(right_gravity[:, :2]), dim=1) ** 0.5
)
|
Computes the penalty on feet orientation so that the gravity vector projected into each foot frame has no x or y component.
This function is adapted from _reward_feet_ori in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
|
penalize_feet_orientation
|
python
|
NVlabs/HOVER
|
neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
Apache-2.0
|
def penalize_feet_air_time(self, ref_motion_state: ReferenceMotionState, **kwargs):
"""
Computes the penalty for the time that the feet spend in the air before their most recent contact with the terrain.
This function is adapted from _reward_feet_air_time_teleop in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
"""
ref_pelvis_vel_xy = ref_motion_state.body_lin_vel[:, 0, :2]
first_contact = self._get_feet_first_contact()
last_feet_air_time = self._get_last_air_time_for_feet()
reward = torch.sum(
(last_feet_air_time - 0.25) * first_contact, dim=1
) # reward only on first contact with the ground
reward *= torch.norm(ref_pelvis_vel_xy, dim=1) > 0.1 # no reward for low ref motion velocity (root xy velocity)
return reward
|
Computes the penalty for the time that the feet spend in the air before their most recent contact with the terrain.
This function is adapted from _reward_feet_air_time_teleop in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
|
penalize_feet_air_time
|
python
|
NVlabs/HOVER
|
neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
Apache-2.0
|
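A standalone sketch of the feet-air-time term with made-up numbers: on first contact a foot contributes (air_time - 0.25) seconds, and the whole term is zeroed when the reference root moves slower than 0.1 m/s in the xy plane.

```python
import torch

last_air_time = torch.tensor([[0.40, 0.10]])     # seconds in the air per foot
first_contact = torch.tensor([[1.0, 0.0]])       # only the left foot just landed
ref_root_vel_xy = torch.tensor([[0.50, 0.00]])   # reference root xy velocity

reward = torch.sum((last_air_time - 0.25) * first_contact, dim=1)
reward *= torch.norm(ref_root_vel_xy, dim=1) > 0.1   # gate on reference motion speed
print(reward)  # tensor([0.1500])
```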
def penalize_both_feet_in_air(self, **kwargs):
"""
Computes the penalty for both feet being in the air.
This function is adapted from _reward_in_the_air in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
"""
feet_in_air = self._get_feet_in_the_air()
return torch.all(feet_in_air, dim=1).float()
|
Computes the penalty for both feet being in the air.
This function is adapted from _reward_in_the_air in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
|
penalize_both_feet_in_air
|
python
|
NVlabs/HOVER
|
neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
Apache-2.0
|
def penalize_orientation(self, articulation_data: ArticulationData, **kwargs):
"""
Computes the penalty based on a non-flat base orientation.
This function is adapted from _reward_orientation in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
"""
projected_gravity = articulation_data.projected_gravity_b
return torch.sum(torch.square(projected_gravity[:, :2]), dim=1)
|
Computes the penalty based on a non-flat base orientation.
This function is adapted from _reward_orientation in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
|
penalize_orientation
|
python
|
NVlabs/HOVER
|
neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
Apache-2.0
|
def penalize_max_feet_height_before_contact(self, body_state: BodyState, **kwargs):
"""
Computes the penalty based on the maximum height of the feet in the air before the current contact.
This function is adapted from _reward_feet_max_height_for_this_air in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
"""
first_contact = self._get_feet_first_contact()
feet_height = body_state.body_pos[:, self._body_state_feet_ids, 2]
self._feet_max_height_in_air = torch.max(self._feet_max_height_in_air, feet_height)
feet_max_height = torch.sum(
(torch.clamp_min(self._cfg.max_feet_height_limit_before_contact - self._feet_max_height_in_air, 0))
* first_contact,
dim=1,
) # reward only on first contact with the ground
feet_in_air = self._get_feet_in_the_air()
self._feet_max_height_in_air *= feet_in_air
return feet_max_height
|
Computes the penalty based on the maximum height of the feet in the air before the current contact.
This function is adapted from _reward_feet_max_height_for_this_air in legged_gym.
Returns:
torch.Tensor: A float tensor of shape (num_envs) representing the computed penalty for each environment.
|
penalize_max_feet_height_before_contact
|
python
|
NVlabs/HOVER
|
neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/isaac_lab_wrapper/neural_wbc/isaac_lab_wrapper/rewards/rewards.py
|
Apache-2.0
|
def deep_compare_dicts(dict1, dict2):
"""Recursively compare two dictionaries, including torch tensors."""
if dict1.keys() != dict2.keys():
return False
for key in dict1:
value1 = dict1[key]
value2 = dict2[key]
if isinstance(value1, dict) and isinstance(value2, dict):
if not deep_compare_dicts(value1, value2):
return False
elif isinstance(value1, torch.Tensor) and isinstance(value2, torch.Tensor):
if not torch.equal(value1, value2):
return False
elif (isinstance(value1, list) and isinstance(value2, list)) or (
isinstance(value1, tuple) and isinstance(value2, tuple)
):
if len(value1) != len(value2):
return False
for item1, item2 in zip(value1, value2):
if not deep_compare_dicts({"item": item1}, {"item": item2}):
return False
elif isinstance(value1, slice) and isinstance(value2, slice):
if (value1.start, value1.stop, value1.step) != (value2.start, value2.stop, value2.step):
return False
elif value1 != value2:
return False
return True
|
Recursively compare two dictionaries, including torch tensors.
|
deep_compare_dicts
|
python
|
NVlabs/HOVER
|
neural_wbc/isaac_lab_wrapper/tests/test_neural_wbc_env_cfg.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/isaac_lab_wrapper/tests/test_neural_wbc_env_cfg.py
|
Apache-2.0
|
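A short usage sketch for deep_compare_dicts, assuming the function defined above is in scope; the dictionaries are made-up configuration-like data mixing tensors, lists, and slices.

```python
import torch

a = {"gains": {"kp": torch.tensor([100.0, 80.0]), "kd": torch.tensor([2.0, 1.5])},
     "joint_ids": [0, 1],
     "lower_body": slice(0, 11)}
b = {"gains": {"kp": torch.tensor([100.0, 80.0]), "kd": torch.tensor([2.0, 1.5])},
     "joint_ids": [0, 1],
     "lower_body": slice(0, 11)}
c = {"gains": {"kp": torch.tensor([100.0, 80.0]), "kd": torch.tensor([2.0, 1.0])},
     "joint_ids": [0, 1],
     "lower_body": slice(0, 11)}

print(deep_compare_dicts(a, b))  # True: identical structure and values
print(deep_compare_dicts(a, c))  # False: the nested kd tensors differ
```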
def position_pd_control(env: NeuralWBCEnv, pos_actions: torch.Tensor, joint_ids=None):
"""Calculates the PD control torque based on the network output position actions"""
robot = env.robot
joint_pos = robot.joint_positions
joint_vel = robot.joint_velocities
if joint_ids:
joint_pos = joint_pos[:, joint_ids]
joint_vel = joint_vel[:, joint_ids]
torques = env.p_gains * (pos_actions - joint_pos) - env.d_gains * joint_vel
return torques
|
Calculates the PD control torque based on the network output position actions
|
position_pd_control
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/control.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/control.py
|
Apache-2.0
|
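position_pd_control implements the usual PD law tau = kp * (q_target - q) - kd * qd. A standalone numeric sketch with made-up gains and joint states:

```python
import torch

p_gains = torch.tensor([[100.0, 100.0, 50.0]])
d_gains = torch.tensor([[2.0, 2.0, 1.0]])
joint_pos = torch.tensor([[0.10, -0.20, 0.05]])
joint_vel = torch.tensor([[0.50, 0.00, -0.30]])
pos_actions = torch.tensor([[0.00, 0.00, 0.00]])   # desired joint positions

torques = p_gains * (pos_actions - joint_pos) - d_gains * joint_vel
print(torques)  # tensor([[-11.0000,  20.0000,  -2.2000]])
```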
def update(self, obs_dict: dict[str, torch.Tensor | list[str] | None]) -> None:
"""Update the underlying model based on the observations from the environment/real robot.
Args:
obs_dict (dict[str, torch.Tensor]): A dictionary containing the latest robot observations.
"""
if "root_pos" in obs_dict:
self._root_position = obs_dict["root_pos"]
if "root_orientation" in obs_dict:
self._root_rotation = obs_dict["root_orientation"]
self._joint_positions = self._sim.joint_positions
self._joint_velocities = self._sim.joint_velocities
self._body_positions = self._sim.body_positions
self._body_rotations = self._sim.body_rotations
self._body_lin_vels, self._body_ang_vels = self._sim.body_velocities
|
Update the underlying model based on the observations from the environment/real robot.
Args:
obs_dict (dict[str, torch.Tensor]): A dictionary containing the latest robot observations.
|
update
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_robot.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_robot.py
|
Apache-2.0
|
def reset(self, **kwargs) -> None:
"""Resets the wrapper
Args:
kwargs (dict[str, Any], optional): key-word arguments to pass to underlying models. Defaults to None.
"""
qpos = kwargs.get("qpos")
qvel = kwargs.get("qvel")
self._sim.reset(qpos=qpos, qvel=qvel)
self.update({})
|
Resets the wrapper.
Args:
kwargs (dict[str, Any], optional): keyword arguments to pass to underlying models. Defaults to None.
|
reset
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_robot.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_robot.py
|
Apache-2.0
|
def __init__(
self,
model_path: str,
sim_dt: float = 0.005,
enable_viewer: bool = False,
num_instances: int = 1,
device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
) -> None:
"""Initialize the underlying mujoco simulator
Args:
model_path: Path to the Mujoco model xml file
sim_dt: Simulation timestep
enable_viewer: Whether to enable the viewer
num_instances: Number of instances to simulate
device: torch device to use for the tensors used by the simulator
Raises:
AssertionError: If num_instances is not 1 or the actuators do not all use the same control type.
"""
assert num_instances == 1, "Only support a single instance for now."
self._model = mj.MjModel.from_xml_path(model_path)
self._model.opt.timestep = sim_dt
self._data = mj.MjData(self._model)
self.device = device
self.has_free_joint = has_free_joint(self._model)
self.joint_pos_offset = self._model.nq - self._model.nu # Because positions are in generalized coordinates
self.joint_vel_offset = self._model.nv - self._model.nu # Because velocities include the free joint
self.num_instances = num_instances
self._viewer = None
if enable_viewer:
self._viewer = MujocoVisualizer(self._model, self._data)
actuator_consistency = self._check_actuator_consistency()
assert actuator_consistency, "Only support that all the actuator use the same control type."
|
Initialize the underlying mujoco simulator
Args:
model_path: Path to the Mujoco model xml file
sim_dt: Simulation timestep
enable_viewer: Whether to enable the viewer
num_instances: Number of instances to simulate
device: torch device to use for the tensors used by the simulator
Raises:
AssertionError: If num_instances is not 1 or the actuators do not all use the same control type.
|
__init__
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
Apache-2.0
|
def joint_names(self) -> list[str]:
"""Get the names of all joints in the model except the free floating joint.
Returns:
list[str]: List of joint names
"""
offset = 0
if self.has_free_joint:
offset = 1 # base/world joint
return [get_entity_name(self._model, "joint", i) for i in range(offset, self._model.njnt)]
|
Get the names of all joints in the model except the free floating joint.
Returns:
list[str]: List of joint names
|
joint_names
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
Apache-2.0
|
def joint_positions(self) -> torch.Tensor:
"""Get the joint positions of the robot as tensor
Returns:
torch.Tensor: Tensor of joint positions
"""
return (
torch.from_numpy(self._data.qpos[self.joint_pos_offset :].copy())
.to(dtype=torch.float32, device=self.device)
.expand(self.num_instances, -1)
)
|
Get the joint positions of the robot as tensor
Returns:
torch.Tensor: Tensor of joint positions
|
joint_positions
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
Apache-2.0
|
def joint_velocities(self) -> torch.Tensor:
"""Get the joint velocities of the robot as tensor
Returns:
torch.Tensor: Tensor of joint velocities
"""
return (
torch.from_numpy(self._data.qvel[self.joint_vel_offset :].copy())
.to(dtype=torch.float32, device=self.device)
.expand(self.num_instances, -1)
)
|
Get the joint velocities of the robot as tensor
Returns:
torch.Tensor: Tensor of joint velocities
|
joint_velocities
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
Apache-2.0
|
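The joint_pos_offset / joint_vel_offset logic above relies on the fact that a floating-base (free) joint contributes 7 position coordinates (3 translation + 4 quaternion) and 6 velocity coordinates. A small arithmetic sketch with an illustrative actuator count:

```python
# Illustrative sizes for a floating-base humanoid; the actuator count is made up.
nu = 19          # actuated joints
nq = 7 + nu      # generalized positions: free joint (7) + actuated joints
nv = 6 + nu      # generalized velocities: free joint (6) + actuated joints

joint_pos_offset = nq - nu   # 7 -> qpos[7:] holds the actuated joint positions
joint_vel_offset = nv - nu   # 6 -> qvel[6:] holds the actuated joint velocities
print(joint_pos_offset, joint_vel_offset)  # 7 6
```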
def body_positions(self) -> torch.Tensor:
"""Get the body positions of the robot as tensor
Returns:
torch.Tensor: Tensor of body positions
"""
# NOTE: Global frame, https://mujoco.readthedocs.io/en/stable/APIreference/APItypes.html
# Get the body positions, excluding the first body (which is typically the world)
robot_body_positions = torch.from_numpy(self._data.xpos[1:].copy())
robot_body_positions = robot_body_positions.to(dtype=torch.float32, device=self.device)
robot_body_positions = robot_body_positions.expand(self.num_instances, -1, -1)
return robot_body_positions
|
Get the body positions of the robot as tensor
Returns:
torch.Tensor: Tensor of body positions
|
body_positions
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
Apache-2.0
|
def body_rotations(self) -> torch.Tensor:
"""Get the body rotations of the robot as tensor
Returns:
torch.Tensor: Tensor of body rotations
"""
# NOTE: Global frame, https://mujoco.readthedocs.io/en/stable/APIreference/APItypes.html
# Get the body rotations, excluding the first body (which is typically the world)
robot_body_rots = torch.from_numpy(self._data.xquat[1:].copy())
robot_body_rots = robot_body_rots.to(dtype=torch.float32, device=self.device)
robot_body_rots = robot_body_rots.expand(self.num_instances, -1, -1)
return robot_body_rots
|
Get the body rotations of the robot as tensor
Returns:
torch.Tensor: Tensor of body rotations
|
body_rotations
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
Apache-2.0
|
def body_velocities(self) -> tuple[torch.Tensor, torch.Tensor]:
"""Get the body linear and angular velocities of the robot as a pair of tensors
Returns:
tuple[torch.Tensor, torch.Tensor]: Tuple of linear and angular body velocities
"""
linear_velocities = torch.zeros(self.num_instances, self._model.nbody - 1, 3, device=self.device)
angular_velocities = torch.zeros(self.num_instances, self._model.nbody - 1, 3, device=self.device)
vel_frame = 0
# NOTE the last parameter indicates the frame to use for velocity calculation, options are:
# 0 - world frame, 1 - body frame, etc.
for _, body_id in self.get_body_ids().items():
# NOTE First three components are linear velocity, the next three are angular velocity
vel_store = np.zeros(6)
# Convert it back to the actual body_id inside the Mujoco model
mj_body_id = body_id + 1
mj.mj_objectVelocity(self._model, self._data, mj.mjtObj.mjOBJ_BODY, mj_body_id, vel_store, vel_frame)
angular_velocities[:, body_id, :] = torch.from_numpy(vel_store[:3]).to(
dtype=torch.float32, device=self.device
)
linear_velocities[:, body_id, :] = torch.from_numpy(vel_store[3:]).to(
dtype=torch.float32, device=self.device
)
return linear_velocities, angular_velocities
|
Get the body linear and angular velocities of the robot as a pair of tensors
Returns:
tuple[torch.Tensor, torch.Tensor]: Tuple of linear and angular body velocities
|
body_velocities
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
Apache-2.0
|
def step(self, actions: np.ndarray | None = None, nsteps: int = 1) -> None:
"""Step the simulation forward nsteps with the given action.
Args:
actions (np.ndarray | None, optional): Action to apply to the robot. Defaults to None.
nsteps (int, optional): Number of steps to take. Defaults to 1.
"""
if actions is None:
actions = np.zeros((self.num_instances, self._model.nu))
if actions.shape != (self.num_instances, self._model.nu):
raise ValueError(
f"Action shape {actions.shape} does not match number of actuators"
f" {(self.num_instances, self._model.nu)}"
)
self._data.ctrl[:] = actions
for _ in range(nsteps):
mj.mj_step(self._model, self._data)
self.update_viewer()
|
Step the simulation forward nsteps with the given action.
Args:
actions (np.ndarray | None, optional): Action to apply to the robot. Defaults to None.
nsteps (int, optional): Number of steps to take. Defaults to 1.
|
step
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
Apache-2.0
|
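A minimal, self-contained MuJoCo stepping sketch (not the wrapper above): a single hinge joint driven by a motor actuator, stepped several simulation substeps per control input. The model XML is a toy example, not the robot model used by the repository.

```python
import mujoco as mj
import numpy as np

XML = """
<mujoco>
  <worldbody>
    <body>
      <joint name="j0" type="hinge" axis="0 1 0"/>
      <geom type="capsule" size="0.02" fromto="0 0 0 0 0 0.2" density="1000"/>
    </body>
  </worldbody>
  <actuator>
    <motor joint="j0"/>
  </actuator>
</mujoco>
"""
model = mj.MjModel.from_xml_string(XML)
model.opt.timestep = 0.005
data = mj.MjData(model)

action = np.array([0.1])        # one actuator -> one control input
data.ctrl[:] = action
for _ in range(4):              # several decimated sim steps per control step
    mj.mj_step(model, data)
print(data.qpos, data.qvel)
```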
def reset(
self,
qpos: np.ndarray | torch.Tensor | None = None,
qvel: np.ndarray | torch.Tensor | None = None,
) -> None:
"""Reset the model to its initial state
Args:
qpos (np.ndarray | torch.Tensor | None, optional): Positions of the generalized coordinates. Defaults to None.
qvel (np.ndarray | torch.Tensor | None, optional): Velocities of the generalized coordinates. Defaults to None.
"""
mj.mj_resetData(self._model, self._data)
self.set_robot_state(qpos, qvel)
|
Reset the model to its initial state
Args:
qpos (np.ndarray | torch.Tensor | None, optional): Positions of the generalized coordinates. Defaults to None.
qvel (np.ndarray | torch.Tensor | None, optional): Velocities of the generalized coordinates. Defaults to None.
|
reset
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
Apache-2.0
|
def set_robot_state(
self, qpos: np.ndarray | torch.Tensor | None = None, qvel: np.ndarray | torch.Tensor | None = None
):
"""
Set robot state including positions and velocities of the generalized coordinates.
Args:
qpos (np.ndarray | torch.Tensor | None, optional): Positions of the generalized coordinates. Defaults to None.
qvel (np.ndarray | torch.Tensor | None, optional): Velocities of the generalized coordinates. Defaults to None.
"""
if qpos is not None:
# Ensure qpos length matches number of joints
qpos = squeeze_if_tensor(qpos)
qpos = to_numpy(qpos)
assert len(qpos) == self._model.nq, f"qpos length {len(qpos)} doesn't match model DoF {self._model.nq}"
self._data.qpos[:] = qpos
if qvel is not None:
# Ensure qvel length matches number of joints
qvel = squeeze_if_tensor(qvel)
qvel = to_numpy(qvel)
assert len(qvel) == self._model.nv, f"qvel length {len(qvel)} doesn't match model DoF {self._model.nv}"
self._data.qvel[:] = qvel
# Forward to update robot state
self.forward()
|
Set robot state including positions and velocities of the generalized coordinates.
Args:
qpos (np.ndarray | torch.Tensor | None, optional): Positions of the generalized coordinates. Defaults to None.
qvel (np.ndarray | torch.Tensor | None, optional): Velocities of the generalized coordinates. Defaults to None.
|
set_robot_state
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
Apache-2.0
|
def get_body_ids(self, body_names: list[str] | None = None, free_joint_offset: int = 1) -> dict[str, int]:
"""Get the IDs of all bodies in the model, indexed after removing the world body.
Args:
body_names (list[str] | None, optional): Names of the bodies. Defaults to None.
free_joint_offset (int, optional): Offset to remove the free joint. Defaults to 1.
Returns:
dict[str, int]: Mapping from body name to body id.
"""
body_names_ = body_names if body_names else self.body_names
body_ids = {}
for name in body_names_:
id_ = get_entity_id(self._model, "body", name)
if id_ > 0:
body_ids[name] = id_ - free_joint_offset
else:
body_ids[name] = id_
return body_ids
|
Get the IDs of all bodies in the model, indexed after removing the world body.
Args:
body_names (list[str] | None, optional): Names of the bodies. Defaults to None.
free_joint_offset (int, optional): Offset to remove the free joint. Defaults to 1.
Returns:
dict[str, int]: Mapping from body name to body id.
|
get_body_ids
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
Apache-2.0
|
def get_joint_ids(self, joint_names: list[str] | None = None, free_joint_offset: int = 1) -> dict[str, int]:
"""Get the IDs of all joints in the model, indexed after removing the free joint.
Args:
joint_names (list[str] | None, optional): Names of the joints. Defaults to None.
free_joint_offset (int, optional): Offset to remove the free joint. Defaults to 1.
Returns:
dict[str, int]: Mapping from joint name to joint id.
"""
joint_name_ = joint_names if joint_names else self.joint_names
joint_ids = {}
for name in joint_name_:
id_ = get_entity_id(self._model, "joint", name)
if id_ > 0:
joint_ids[name] = id_ - free_joint_offset
else:
joint_ids[name] = id_
return joint_ids
|
Get the IDs of all joints in the model, indexed after removing the free joint.
Args:
joint_names (list[str] | None, optional): Names of the joints. Defaults to None.
free_joint_offset (int, optional): Offset to remove the free joint. Defaults to 1.
Returns:
dict[str, int]: Mapping from joint name to joint id.
|
get_joint_ids
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
Apache-2.0
|
def get_body_pose(self, body_name: str = "pelvis") -> tuple[torch.Tensor, torch.Tensor]:
"""Get the position and quaternion of the base
Args:
body_name (str, optional): Name of the body. Defaults to 'pelvis'.
Returns:
tuple[torch.Tensor, torch.Tensor]: Position and quaternion of the base
"""
body_id = self.get_body_ids([body_name])[body_name]
if body_id < 0:
raise ValueError(f"Body '{body_name}' not found in the model.")
# Convert it back to the actual body_id inside the Mujoco model
body_id += 1
body_pos = (
torch.from_numpy(self._data.xpos[body_id].copy()).to(device=self.device).expand(self.num_instances, -1)
)
body_quat = (
torch.from_numpy(self._data.xquat[body_id].copy()).to(device=self.device).expand(self.num_instances, -1)
)
return body_pos, body_quat
|
Get the position and quaternion of the base
Args:
body_name (str, optional): Name of the body. Defaults to 'pelvis'.
Returns:
tuple[torch.Tensor, torch.Tensor]: Position and quaternion of the base
|
get_body_pose
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
Apache-2.0
|
def get_base_projected_gravity(self, base_name: str = "pelvis") -> torch.Tensor:
"""Get the projection of the gravity vector to the base frame
Args:
base_name (str, optional): Name of the base. Defaults to 'pelvis'.
Returns:
torch.Tensor: Projection of the gravity vector to the base frame
"""
world_gravity = self._model.opt.gravity
# Normalize the gravity to match IsaacLab.
world_gravity = world_gravity / np.linalg.norm(world_gravity)
body_id = self.get_body_ids([base_name])[base_name]
if body_id < 0:
raise ValueError(f"Base '{base_name}' not found in the model.")
# Convert it back to the actual body_id inside the Mujoco model
body_id += 1
root_b_w = np.linalg.inv(self.data.xmat[body_id].reshape(3, 3))
grav = root_b_w @ world_gravity # Gravity vector in body frame
return torch.tensor(grav, device=self.device, dtype=torch.float32).expand(self.num_instances, -1)
|
Get the projection of the gravity vector to the base frame
Args:
base_name (str, optional): Name of the base. Defaults to 'pelvis'.
Returns:
torch.Tensor: Projection of the gravity vector to the base frame
|
get_base_projected_gravity
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
Apache-2.0
|
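The projection above multiplies the normalized world gravity by the inverse (transpose) of the body's rotation matrix. A NumPy sketch using a made-up 90-degree rotation about the x axis in place of data.xmat:

```python
import numpy as np

world_gravity = np.array([0.0, 0.0, -9.81])
world_gravity = world_gravity / np.linalg.norm(world_gravity)   # (0, 0, -1)

# Body rotated 90 degrees about the world x axis (world-from-body rotation).
R_wb = np.array([[1.0, 0.0, 0.0],
                 [0.0, 0.0, -1.0],
                 [0.0, 1.0, 0.0]])
R_bw = np.linalg.inv(R_wb)        # equal to R_wb.T for a rotation matrix
grav_b = R_bw @ world_gravity     # gravity direction expressed in the body frame
print(grav_b)                     # [ 0. -1.  0.]
```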
def get_contact_forces_with_floor(self, body_name: str) -> torch.Tensor:
"""Get the contact forces on a given body with the floor
Args:
body_name (str): Name of the body
Returns:
torch.Tensor: Contact forces, shape (num_envs, 3), i.e. normal and two tangent directions
Notes:
Only checks contacts with the floor, and thus assumes the loaded model contains a floor geometry object.
This can be easily ensured by loading the scene.xml file that includes the specific robot model xml file.
"""
if body_name not in self.body_names:
raise ValueError(f"Body '{body_name}' not found in the model.")
zero_contact = torch.zeros(self.num_instances, 3).to(dtype=torch.float32, device=self.device)
if self._data.ncon == 0:
return zero_contact
body_id = self.get_body_ids([body_name])[body_name]
if body_id < 0:
raise ValueError(f"Base '{body_name}' not found in the model.")
body_id += 1
# NOTE: We assume existence of a floor and only check the contacts with the floor
floor_body_id = 0
for i in range(self._data.ncon):
contact = self._data.contact[i]
contact_body_id_1 = self._model.geom_bodyid[contact.geom1]
contact_body_id_2 = self._model.geom_bodyid[contact.geom2]
if {contact_body_id_1, contact_body_id_2} == {body_id, floor_body_id}:
contact_force = np.zeros(6)
mj.mj_contactForce(self._model, self._data, i, contact_force)
return torch.from_numpy(contact_force[:3].copy()).to(device=self.device).expand(self.num_instances, -1)
return zero_contact
|
Get the contact forces on a given body with the floor
Args:
body_name (str): Name of the body
Returns:
torch.Tensor: Contact forces, shape (num_envs, 3), i.e. normal and two tangent directions
Notes:
Only checks contacts with the floor, and thus assumes the loaded model contains a floor geometry object.
This can be easily ensured by loading the scene.xml file that includes the specific robot model xml file.
|
get_contact_forces_with_floor
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
Apache-2.0
|
def print_actuator_info(self, actuators_id: list[int] | None = None):
"""Utility function to print out actuator types in the model.
Args:
actuators_id (list[int] | None, optional): Actuator ids. Defaults to None.
"""
actuators_id_ = range(self.model.nu) if actuators_id is None else actuators_id
for actuator_id in actuators_id_:
print(f"Actuator {actuator_id}:")
# Print meaning of ctrl for this actuator
if self.model.actuator_gaintype[actuator_id] == mj.mjtGain.mjGAIN_FIXED:
print("Direct force/torque control")
elif self.model.actuator_gaintype[actuator_id] == mj.mjtGain.mjGAIN_AFFINE:
if self.model.actuator_biastype[actuator_id] == mj.mjtBias.mjBIAS_NONE:
print("Force/torque control with scaling")
else:
print("Position or velocity control")
|
Utility function to print out actuator types in the model.
Args:
actuators_id (list[int] | None, optional): Actuator ids. Defaults to None.
|
print_actuator_info
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
Apache-2.0
|
def _check_actuator_consistency(self):
"""Check whether all the actuators share the same control mode."""
actuator_type_system = None
for actuator_id in range(self.model.nu):
actuator_type = self.model.actuator_trntype[actuator_id]
if actuator_type_system is None:
actuator_type_system = actuator_type
else:
if actuator_type_system != actuator_type:
return False
return True
|
Check whether all the actuators share the same control mode.
|
_check_actuator_consistency
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_simulator.py
|
Apache-2.0
|
def get_entity_name(model: mj.MjModel, entity_type: str, entity_id: int) -> str:
"""Gets name of an entity based on ID
Args:
model (mj.MjModel): model
entity_type (str): entity type
entity_id (int): entity id
Returns:
str: entity name
"""
if entity_type == "body":
return model.body(entity_id).name
return mj.mj_id2name(model, OBJECT_MAP[entity_type], entity_id)
|
Gets name of an entity based on ID
Args:
model (mj.MjModel): model
entity_type (str): entity type
entity_id (int): entity id
Returns:
str: entity name
|
get_entity_name
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_utils.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/mujoco_utils.py
|
Apache-2.0
|
def to_numpy(x):
"""
Check if input is a PyTorch tensor and convert to numpy array if true.
Otherwise return the input unchanged.
Args:
x: Input to check and potentially convert
Returns:
numpy array if input was torch tensor, otherwise original input
"""
if isinstance(x, torch.Tensor):
return x.detach().cpu().numpy()
return x
|
Check if input is a PyTorch tensor and convert to numpy array if true.
Otherwise return the input unchanged.
Args:
x: Input to check and potentially convert
Returns:
numpy array if input was torch tensor, otherwise original input
|
to_numpy
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/utils.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/utils.py
|
Apache-2.0
|
def squeeze_if_tensor(x, dim: int = 0):
"""
Check if input is a PyTorch tensor and squeeze along the given dim if true.
Args:
x: Input to check and potentially convert
dim: Dimension to squeeze
Returns:
squeezed tensor if the input was a torch tensor, otherwise original input
"""
if isinstance(x, torch.Tensor):
return x.squeeze(dim=dim)
return x
|
Check if input is a PyTorch tensor and squeeze along the given dim if true.
Args:
x: Input to check and potentially convert
dim: Dimension to squeeze
Returns:
squeezed tensor if the input was a torch tensor, otherwise original input
|
squeeze_if_tensor
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/utils.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/utils.py
|
Apache-2.0
|
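A short usage sketch for the two helpers above (to_numpy and squeeze_if_tensor, assumed to be in scope): a batched (1, nq) tensor is squeezed to (nq,) and converted to a NumPy array before being written into MuJoCo's qpos buffer. nq = 26 is an arbitrary example size.

```python
import torch

qpos_tensor = torch.zeros(1, 26)         # batched state from the policy side
qpos = squeeze_if_tensor(qpos_tensor)    # shape (26,)
qpos = to_numpy(qpos)                    # NumPy array, ready for data.qpos[:]
print(type(qpos), qpos.shape)            # <class 'numpy.ndarray'> (26,)
```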
def draw_reference_state(self, state: ReferenceMotionState):
"""Visualize the reference state in Mujoco."""
body_pos_np = np.squeeze(state.body_pos.detach().cpu().numpy())
body_pos_extend_np = np.squeeze(state.body_pos_extend.detach().cpu().numpy())
body_pos = np.vstack([body_pos_np, body_pos_extend_np])
for i in range(body_pos.shape[0]):
self._viewer.add_marker(
pos=body_pos[i],
size=0.05,
rgba=(1, 0, 0, 1),
type=mj.mjtGeom.mjGEOM_SPHERE,
label="",
id=i,
)
|
Visualize the reference state in Mujoco.
|
draw_reference_state
|
python
|
NVlabs/HOVER
|
neural_wbc/mujoco_wrapper/mujoco_wrapper/visualization.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/mujoco_wrapper/mujoco_wrapper/visualization.py
|
Apache-2.0
|
def _produce_actions(self, obs: dict) -> torch.Tensor:
"""
Roll out the environment with either the expert policy or the student policy, depending on
the current state of training.
"""
if self._iterations + self.start_iteration >= self._cfg.student_rollout_iteration:
observations = obs["student_policy"]
action = self._student.act(observations)
else:
action = self._teacher.act_rollout(obs["teacher_policy"])
return action
|
Roll out the environment with either the expert policy or the student policy, depending on
the current state of training.
|
_produce_actions
|
python
|
NVlabs/HOVER
|
neural_wbc/student_policy/neural_wbc/student_policy/student_policy_trainer.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/student_policy/neural_wbc/student_policy/student_policy_trainer.py
|
Apache-2.0
|
def save(self, file_path: str):
"""
Save the dataclass fields to a JSON file.
Args:
file_path (str): The path to the file where the JSON will be saved.
"""
# Convert the dataclass to a dictionary
data_dict = {
field_info.name: getattr(self, field_info.name)
for field_info in fields(self)
if field_info.name != "teacher_policy"
}
# Customize the teacher_policy field in the dictionary
teacher_policy_path = self.teacher_policy.path
if teacher_policy_path is not None:
data_dict["teacher_policy"] = teacher_policy_path
# Write the dictionary to a JSON file
with open(file_path, "w") as f:
json.dump(data_dict, f, indent=4)
|
Save the dataclass fields to a JSON file.
Args:
file_path (str): The path to the file where the JSON will be saved.
|
save
|
python
|
NVlabs/HOVER
|
neural_wbc/student_policy/neural_wbc/student_policy/student_policy_trainer_cfg.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/student_policy/neural_wbc/student_policy/student_policy_trainer_cfg.py
|
Apache-2.0
|
def add_args_to_parser(parser: argparse.ArgumentParser, default_overwrites: dict = {}):
"""
Add the fields of the dataclass (except for `teacher_policy`) to an ArgumentParser.
This method iterates over the fields of the StudentPolicyTrainerCfg dataclass and adds them as arguments
to the provided ArgumentParser. The `teacher_policy` field is skipped. If a field has a default value or
a default factory, that value is used as the default for the argument. The `default_overwrites` dictionary
can be used to override the default values for specific fields.
Args:
parser (argparse.ArgumentParser): The argument parser to which the arguments will be added.
default_overwrites (dict): A dictionary of field names and their corresponding default values to overwrite.
"""
group = parser.add_argument_group("Student policy configurations")
# Value of the following fields should come from the environment or processed.
non_configurable_fields = {
"num_policy_obs",
"num_student_obs",
"num_actions",
"teacher_policy",
"student_policy_path",
}
for field_info in fields(StudentPolicyTrainerCfg):
if field_info.name in non_configurable_fields:
continue
# Set the argument name
arg_name = f"--{StudentPolicyTrainerCfg.args_prefix()}{field_info.name}"
arg_type = field_info.type
arg_help = field_info.metadata.get("description", "")
# Handle default values and types
default = None
if field_info.name in default_overwrites:
default = default_overwrites[field_info.name]
elif field_info.default is not MISSING:
default = field_info.default
elif field_info.default_factory is not MISSING:
default = field_info.default_factory()
# Add the argument to the parser
group.add_argument(arg_name, type=arg_type, default=default, help=arg_help)
|
Add the fields of the dataclass (except for `teacher_policy`) to an ArgumentParser.
This method iterates over the fields of the StudentPolicyTrainerCfg dataclass and adds them as arguments
to the provided ArgumentParser. The `teacher_policy` field is skipped. If a field has a default value or
a default factory, that value is used as the default for the argument. The `default_overwrites` dictionary
can be used to override the default values for specific fields.
Args:
parser (argparse.ArgumentParser): The argument parser to which the arguments will be added.
default_overwrites (dict): A dictionary of field names and their corresponding default values to overwrite.
|
add_args_to_parser
|
python
|
NVlabs/HOVER
|
neural_wbc/student_policy/neural_wbc/student_policy/student_policy_trainer_cfg.py
|
https://github.com/NVlabs/HOVER/blob/master/neural_wbc/student_policy/neural_wbc/student_policy/student_policy_trainer_cfg.py
|
Apache-2.0
|
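A self-contained sketch of the dataclass-to-argparse pattern used above, with a made-up config class and argument prefix: defaults come from the dataclass fields and the "description" metadata becomes the help string.

```python
import argparse
from dataclasses import MISSING, dataclass, field, fields

@dataclass
class DemoCfg:
    learning_rate: float = 1e-3
    num_epochs: int = field(default=10, metadata={"description": "Training epochs."})

parser = argparse.ArgumentParser()
group = parser.add_argument_group("Demo configurations")
for field_info in fields(DemoCfg):
    default = field_info.default if field_info.default is not MISSING else None
    group.add_argument(
        f"--demo.{field_info.name}",
        type=field_info.type,
        default=default,
        help=field_info.metadata.get("description", ""),
    )

args = parser.parse_args(["--demo.num_epochs", "25"])
print(getattr(args, "demo.learning_rate"), getattr(args, "demo.num_epochs"))  # 0.001 25
```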
def load(filepath: str) -> "TeacherPolicyCfg":
"""Loads the configuration from a YAML file."""
with open(filepath, encoding="utf-8") as file:
data = json.load(file)
cfg = fromdict(TeacherPolicyCfg, data)
return cfg
|
Loads the configuration from a JSON file.
|
load
|
python
|
NVlabs/HOVER
|
scripts/rsl_rl/teacher_policy_cfg.py
|
https://github.com/NVlabs/HOVER/blob/master/scripts/rsl_rl/teacher_policy_cfg.py
|
Apache-2.0
|
def add_args_to_parser(parser: argparse.ArgumentParser, default_overwrites: dict = {}):
"""Adds configuration fields to an ArgumentParser."""
group = parser.add_argument_group("Teacher policy configurations (RSL RL)")
def add_fields_to_parser(fields):
for field_info in fields:
arg_name = f"--{TeacherPolicyCfg.args_prefix()}{field_info.name}"
arg_type = field_info.type
arg_help = field_info.metadata.get("description", "")
default = None
if field_info.name in default_overwrites:
default = default_overwrites[field_info.name]
elif field_info.default is not MISSING:
default = field_info.default
elif field_info.default_factory is not MISSING:
default = field_info.default_factory()
if isinstance(default, list): # Special handling for lists
group.add_argument(arg_name, type=type(default[0]), nargs="+", default=default, help=arg_help)
elif isinstance(default, bool):
if default:
group.add_argument(arg_name, action="store_false", help=arg_help)
else:
group.add_argument(arg_name, action="store_true", help=arg_help)
else:
group.add_argument(arg_name, type=arg_type, default=default, help=arg_help)
# Add fields from each section with a prefix for context
add_fields_to_parser(f for f in fields(TeacherPolicyCfg) if not is_dataclass(f.type))
add_fields_to_parser(fields(PolicyCfg))
add_fields_to_parser(fields(AlgorithmCfg))
add_fields_to_parser(fields(RunnerCfg))
|
Adds configuration fields to an ArgumentParser.
|
add_args_to_parser
|
python
|
NVlabs/HOVER
|
scripts/rsl_rl/teacher_policy_cfg.py
|
https://github.com/NVlabs/HOVER/blob/master/scripts/rsl_rl/teacher_policy_cfg.py
|
Apache-2.0
|
def from_argparse_args(args: argparse.Namespace) -> "TeacherPolicyCfg":
"""Creates an instance from argparse arguments."""
def extract_fields(cls: Type) -> dict:
"""Helper function to extract fields for a given dataclass type."""
extracted_fields = {
field.name: getattr(args, TeacherPolicyCfg.args_prefix() + field.name)
for field in fields(cls)
if hasattr(args, TeacherPolicyCfg.args_prefix() + field.name)
}
return extracted_fields
policy_args = extract_fields(PolicyCfg)
algorithm_args = extract_fields(AlgorithmCfg)
runner_args = extract_fields(RunnerCfg)
return TeacherPolicyCfg(
seed=getattr(args, f"{TeacherPolicyCfg.args_prefix()}seed"),
policy=PolicyCfg(**policy_args),
algorithm=AlgorithmCfg(**algorithm_args),
runner=RunnerCfg(**runner_args),
)
|
Creates an instance from argparse arguments.
|
from_argparse_args
|
python
|
NVlabs/HOVER
|
scripts/rsl_rl/teacher_policy_cfg.py
|
https://github.com/NVlabs/HOVER/blob/master/scripts/rsl_rl/teacher_policy_cfg.py
|
Apache-2.0
|
def extract_fields(cls: Type) -> dict:
"""Helper function to extract fields for a given dataclass type."""
extracted_fields = {
field.name: getattr(args, TeacherPolicyCfg.args_prefix() + field.name)
for field in fields(cls)
if hasattr(args, TeacherPolicyCfg.args_prefix() + field.name)
}
return extracted_fields
|
Helper function to extract fields for a given dataclass type.
|
extract_fields
|
python
|
NVlabs/HOVER
|
scripts/rsl_rl/teacher_policy_cfg.py
|
https://github.com/NVlabs/HOVER/blob/master/scripts/rsl_rl/teacher_policy_cfg.py
|
Apache-2.0
|
def resolve_student_policy_path(root_path: str, teacher_policy: NeuralWBCTeacherPolicy, append_timestamp: bool):
"""
Generates a path for storing student policy configurations based on the root path and teacher policy.
This function appends the teacher policy name, and optionally a timestamp, to the given root path to create a unique
path for storing student policy configurations.
Args:
root_path (str): The root directory path where the student policy configurations will be stored.
teacher_policy (NeuralWBCTeacherPolicy): The teacher policy instance whose name will be used in the generated path.
append_timestamp (bool): Whether to append a timestamp to the generated path.
Returns:
str: The generated absolute path for storing the student policy configurations.
"""
path = os.path.join(os.path.abspath(root_path), teacher_policy.name)
if append_timestamp:
path += "_" + datetime.now().strftime("%y_%m_%d_%H-%M-%S")
return path
|
Generates a path for storing student policy configurations based on the root path and teacher policy.
This function appends the teacher policy name, and optionally a timestamp, to the given root path to create a unique
path for storing student policy configurations.
Args:
root_path (str): The root directory path where the student policy configurations will be stored.
teacher_policy (NeuralWBCTeacherPolicy): The teacher policy instance whose name will be used in the generated path.
append_timestamp (bool): Whether to append a timestamp to the generated path.
Returns:
str: The generated absolute path for storing the student policy configurations.
|
resolve_student_policy_path
|
python
|
NVlabs/HOVER
|
scripts/rsl_rl/train_student_policy.py
|
https://github.com/NVlabs/HOVER/blob/master/scripts/rsl_rl/train_student_policy.py
|
Apache-2.0
|
def get_config_values_from_argparser(args: argparse.Namespace, teacher_policy: NeuralWBCTeacherPolicy):
"""
Extracts configuration values from command-line arguments based on a predefined prefix and field names.
This function processes the command-line arguments, filters out those that match a specified prefix,
and returns a dictionary of configuration values that are relevant to the StudentPolicyTrainerCfg dataclass.
Args:
args (argparse.Namespace): The parsed command-line arguments.
teacher_policy (NeuralWBCTeacherPolicy): The teacher policy instance added to the returned configuration values.
Returns:
dict: A dictionary containing configuration values extracted from the command-line arguments.
"""
# Get the set of all field names in the StudentPolicyTrainerCfg dataclass
field_names = {field_info.name for field_info in fields(StudentPolicyTrainerCfg)}
# Extract arguments from the args object and remove the prefix
args_dict = {
key.removeprefix(student_policy_args_prefix): value
for key, value in vars(args).items()
if key.startswith(student_policy_args_prefix) and key.removeprefix(student_policy_args_prefix) in field_names
}
args_dict["teacher_policy"] = teacher_policy
# Set student policy path
args_dict["student_policy_path"] = resolve_student_policy_path(
root_path=getattr(args, f"{student_policy_args_prefix}root_path"),
teacher_policy=teacher_policy,
append_timestamp=getattr(args, f"{student_policy_args_prefix}append_timestamp"),
)
return args_dict
|
Extracts configuration values from command-line arguments based on a predefined prefix and field names.
This function processes the command-line arguments, filters out those that match a specified prefix,
and returns a dictionary of configuration values that are relevant to the StudentPolicyTrainerCfg dataclass.
Args:
args (argparse.Namespace): The parsed command-line arguments.
teacher_policy (NeuralWBCTeacherPolicy): The teacher policy instance added to the returned configuration values.
Returns:
dict: A dictionary containing configuration values extracted from the command-line arguments.
|
get_config_values_from_argparser
|
python
|
NVlabs/HOVER
|
scripts/rsl_rl/train_student_policy.py
|
https://github.com/NVlabs/HOVER/blob/master/scripts/rsl_rl/train_student_policy.py
|
Apache-2.0
|
def get_student_trainer_cfg(args: argparse.Namespace, env: NeuralWBCEnv, teacher_policy: NeuralWBCTeacherPolicy):
"""
Create an instance of StudentPolicyTrainerCfg from command-line arguments, environment configuration, and a teacher policy.
This function processes the command-line arguments, extracts necessary values, fills in any missing values using
the environment configuration, and creates an instance of the StudentPolicyTrainerCfg dataclass.
Args:
args (argparse.Namespace): The parsed command-line arguments.
env (NeuralWBCEnv): The environment object that provides necessary configuration values.
teacher_policy (NeuralWBCTeacherPolicy): The teacher policy instance to be included in the configuration.
Returns:
StudentPolicyTrainerCfg: An instance of the StudentPolicyTrainerCfg dataclass.
Raises:
ValueError: If a required field does not have a default value and is not provided in the arguments or the environment configuration.
"""
# First try loading pre-existing config.
previous_config = load_student_policy_trainer_cfg(args=args, teacher_policy=teacher_policy)
if previous_config:
return previous_config
args_dict = get_config_values_from_argparser(args=args, teacher_policy=teacher_policy)
# Identify fields that do not have default values and are not provided in the arguments
no_default_fields = [
field_info.name
for field_info in fields(StudentPolicyTrainerCfg)
if args_dict.get(field_info.name) is None
and field_info.default is MISSING
and field_info.default_factory is MISSING
]
# Fill in missing values for fields that do not have defaults
for name in no_default_fields:
value = None
if name == "num_policy_obs":
# Special case: get the value from the environment's num_observations attribute
value = env.num_observations
elif hasattr(env, name):
# Check if the environment has an attribute matching the field name
value = getattr(env, name)
elif hasattr(env.cfg, name):
# Check if the environment's configuration has an attribute matching the field name
value = getattr(env.cfg, name)
else:
# Raise an error if the field value cannot be determined
raise ValueError(
f"student_policy.{name} does not have a default value in the student training configuration or the"
" environment. Please specify a value."
)
# Update the args_dict with the determined value
args_dict[name] = value
return StudentPolicyTrainerCfg(**args_dict)
|
Create an instance of StudentPolicyTrainerCfg from command-line arguments, environment configuration, and a teacher policy.
This function processes the command-line arguments, extracts necessary values, fills in any missing values using
the environment configuration, and creates an instance of the StudentPolicyTrainerCfg dataclass.
Args:
args (argparse.Namespace): The parsed command-line arguments.
env (NeuralWBCEnv): The environment object that provides necessary configuration values.
teacher_policy (NeuralWBCTeacherPolicy): The teacher policy instance to be included in the configuration.
Returns:
StudentPolicyTrainerCfg: An instance of the StudentPolicyTrainerCfg dataclass.
Raises:
ValueError: If a required field does not have a default value and is not provided in the arguments or the environment configuration.
|
get_student_trainer_cfg
|
python
|
NVlabs/HOVER
|
scripts/rsl_rl/train_student_policy.py
|
https://github.com/NVlabs/HOVER/blob/master/scripts/rsl_rl/train_student_policy.py
|
Apache-2.0
|
def load_student_policy_trainer_cfg(args: argparse.Namespace, teacher_policy: NeuralWBCTeacherPolicy):
"""
Loads the student policy trainer configuration from a specified resume path.
This function checks if a resume path is provided in the command-line arguments. If so, it loads the configuration
from a config.json at the resume path, updates it with any new arguments provided, and returns an instance of
StudentPolicyTrainerCfg.
Args:
args (argparse.Namespace): The parsed command-line arguments.
teacher_policy (NeuralWBCTeacherPolicy): The teacher policy instance to be included in the configuration.
Returns:
StudentPolicyTrainerCfg or None: An instance of the StudentPolicyTrainerCfg dataclass if a resume path is provided,
otherwise None.
"""
# Check if resume_path is specified by the user.
resume_path = getattr(args, f"{student_policy_args_prefix}resume_path")
if not resume_path:
return None
with open(os.path.join(resume_path, "config.json")) as fh:
config_dict = json.load(fh)
del config_dict["teacher_policy"]
args_dict = get_config_values_from_argparser(args=args, teacher_policy=teacher_policy)
for key, value in args_dict.items():
# Only overwrite values from arguments that are not None.
if value is not None:
config_dict[key] = value
return StudentPolicyTrainerCfg(**config_dict)
|
Loads the student policy trainer configuration from a specified resume path.
This function checks if a resume path is provided in the command-line arguments. If so, it loads the configuration
from a config.json at the resume path, updates it with any new arguments provided, and returns an instance of
StudentPolicyTrainerCfg.
Args:
args (argparse.Namespace): The parsed command-line arguments.
teacher_policy (NeuralWBCTeacherPolicy): The teacher policy instance to be included in the configuration.
Returns:
StudentPolicyTrainerCfg or None: An instance of the StudentPolicyTrainerCfg dataclass if a resume path is provided,
otherwise None.
|
load_student_policy_trainer_cfg
|
python
|
NVlabs/HOVER
|
scripts/rsl_rl/train_student_policy.py
|
https://github.com/NVlabs/HOVER/blob/master/scripts/rsl_rl/train_student_policy.py
|
Apache-2.0
|
def get_customized_rsl_rl():
"""Helper function to ensure the correct version of rsl_rl is imported.
This function does the following:
1. Gets the installed rsl_rl package location and adds it to sys.path
2. Removes any existing rsl_rl and submodules from sys.modules to force reimporting
"""
import sys
import pkg_resources
dist = pkg_resources.require("rsl_rl")[0]
sys.path.insert(0, dist.location)
# Remove 'rsl_rl' from sys.modules if it was already imported
modules_to_remove = [key for key in sys.modules if key.startswith("rsl_rl")]
for module in modules_to_remove:
print(f"Removing {module} from sys.modules")
del sys.modules[module]
|
Helper function to ensure the correct version of rsl_rl is imported.
This function does the following:
1. Gets the installed rsl_rl package location and adds it to sys.path
2. Removes any existing rsl_rl and submodules from sys.modules to force reimporting
|
get_customized_rsl_rl
|
python
|
NVlabs/HOVER
|
scripts/rsl_rl/utils.py
|
https://github.com/NVlabs/HOVER/blob/master/scripts/rsl_rl/utils.py
|
Apache-2.0
|
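A hedged usage sketch of the helper above: it is meant to run before any rsl_rl import so that later imports resolve against the installed distribution; the follow-up import shown here is a plausible example, not mandated by the source:
get_customized_rsl_rl()
from rsl_rl.runners import OnPolicyRunner  # now re-imported from the pinned install location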
def __init__(self, env: NeuralWBCEnv):
"""Initializes the wrapper.
Note:
The wrapper calls :meth:`reset` at the start since the RSL-RL runner does not call reset.
Args:
env: The environment to wrap around.
"""
super().__init__(mode=env.cfg.mode)
# initialize the wrapper
self._env = env
# store information required by wrapper
self.num_envs = self._env.num_envs
self.device = self._env.device
self.reference_motion_manager = self._env.reference_motion_manager
self.max_episode_length = self._env.max_episode_length
if hasattr(self._env, "action_manager"):
self.num_actions = self.unwrapped.action_manager.total_action_dim
else:
self.num_actions = self._env.action_space.shape[1]
if hasattr(self._env, "observation_manager"):
self.num_obs = self._env.observation_manager.group_obs_dim["teacher_policy"][0]
else:
self.num_obs = self._env.num_observations
# -- privileged observations
if hasattr(self._env, "observation_manager") and "critic" in self._env.observation_manager.group_obs_dim:
self.num_privileged_obs = self._env.observation_manager.group_obs_dim["critic"][0]
elif hasattr(self._env, "state_space"):
self.num_privileged_obs = self._env.state_space.shape[1]
else:
self.num_privileged_obs = 0
# reset at the start since the RSL-RL runner does not call reset
self._env.reset()
|
Initializes the wrapper.
Note:
The wrapper calls :meth:`reset` at the start since the RSL-RL runner does not call reset.
Args:
env: The environment to wrap around.
|
__init__
|
python
|
NVlabs/HOVER
|
scripts/rsl_rl/vecenv_wrapper.py
|
https://github.com/NVlabs/HOVER/blob/master/scripts/rsl_rl/vecenv_wrapper.py
|
Apache-2.0
|
def test_save_and_load(self):
"""Test saving and loading the configuration to/from a file."""
# Create a temporary file to save the configuration
with tempfile.NamedTemporaryFile(delete=False) as tmp_file:
file_path = tmp_file.name
# Save the configuration to the file
self.config.save(file_path)
# Load the configuration from the file
loaded_config = TeacherPolicyCfg.load(file_path)
# Compare the original and loaded configurations
self.assertEqual(self.config, loaded_config)
|
Test saving and loading the configuration to/from a file.
|
test_save_and_load
|
python
|
NVlabs/HOVER
|
scripts/rsl_rl/tests/test_teacher_policy_cfg.py
|
https://github.com/NVlabs/HOVER/blob/master/scripts/rsl_rl/tests/test_teacher_policy_cfg.py
|
Apache-2.0
|
def test_add_args_to_parser(self):
"""Test adding configuration fields to an argument parser."""
parser = argparse.ArgumentParser(description="RSL-RL configuration")
TeacherPolicyCfg.add_args_to_parser(parser)
# Create sample arguments
args = [
"--teacher_policy.seed",
"42",
"--teacher_policy.init_noise_std",
"2.0",
"--teacher_policy.actor_hidden_dims",
"256",
"128",
"--teacher_policy.learning_rate",
"0.0005",
"--teacher_policy.max_iterations",
"5000",
"--teacher_policy.path",
"/tmp",
]
# Parse the arguments
parsed_args = parser.parse_args(args)
# Verify the parsed arguments using getattr
self.assertEqual(getattr(parsed_args, "teacher_policy.seed"), 42)
self.assertEqual(getattr(parsed_args, "teacher_policy.init_noise_std"), 2.0)
self.assertEqual(getattr(parsed_args, "teacher_policy.actor_hidden_dims"), [256, 128])
self.assertEqual(getattr(parsed_args, "teacher_policy.learning_rate"), 0.0005)
self.assertEqual(getattr(parsed_args, "teacher_policy.max_iterations"), 5000)
|
Test adding configuration fields to an argument parser.
|
test_add_args_to_parser
|
python
|
NVlabs/HOVER
|
scripts/rsl_rl/tests/test_teacher_policy_cfg.py
|
https://github.com/NVlabs/HOVER/blob/master/scripts/rsl_rl/tests/test_teacher_policy_cfg.py
|
Apache-2.0
|
def test_default_values(self):
"""Test the default values of the configuration."""
# Verify default values
self.assertEqual(self.config.seed, 1)
self.assertEqual(self.config.policy.init_noise_std, 1.0)
self.assertEqual(self.config.policy.actor_hidden_dims, [512, 256, 128])
self.assertEqual(self.config.algorithm.learning_rate, 1.0e-3)
self.assertEqual(self.config.runner.max_iterations, 10000000)
|
Test the default values of the configuration.
|
test_default_values
|
python
|
NVlabs/HOVER
|
scripts/rsl_rl/tests/test_teacher_policy_cfg.py
|
https://github.com/NVlabs/HOVER/blob/master/scripts/rsl_rl/tests/test_teacher_policy_cfg.py
|
Apache-2.0
|
def test_add_args_to_parser_with_default_overwrites(self):
"""Test adding configuration fields to an argument parser with default overwrites."""
parser = argparse.ArgumentParser(description="RSL-RL configuration")
default_overwrites = {
"seed": 10,
"init_noise_std": 0.5,
"learning_rate": 0.0015,
"max_iterations": 2000,
}
TeacherPolicyCfg.add_args_to_parser(parser, default_overwrites)
# Parse the arguments with no optional arguments
parsed_args = parser.parse_args(
[
"--teacher_policy.path",
"/tmp",
]
)
# Verify the overwritten default values
self.assertEqual(getattr(parsed_args, "teacher_policy.seed"), default_overwrites["seed"])
self.assertEqual(getattr(parsed_args, "teacher_policy.init_noise_std"), default_overwrites["init_noise_std"])
self.assertEqual(
getattr(parsed_args, "teacher_policy.learning_rate"),
default_overwrites["learning_rate"],
)
self.assertEqual(getattr(parsed_args, "teacher_policy.max_iterations"), default_overwrites["max_iterations"])
|
Test adding configuration fields to an argument parser with default overwrites.
|
test_add_args_to_parser_with_default_overwrites
|
python
|
NVlabs/HOVER
|
scripts/rsl_rl/tests/test_teacher_policy_cfg.py
|
https://github.com/NVlabs/HOVER/blob/master/scripts/rsl_rl/tests/test_teacher_policy_cfg.py
|
Apache-2.0
|
def format_seconds_to_human_readable(self, total_seconds):
"""Formats seconds into a human readable string with hours, minutes and seconds.
Args:
total_seconds (float): Number of seconds to format
Returns:
str: Formatted string in the format "Xh, Ym, Zs" where X=hours, Y=minutes, Z=seconds
"""
hours: int = total_seconds // 3600
minutes: int = (total_seconds % 3600) // 60
seconds: float = total_seconds % 60
return f"{hours:.0f}h, {minutes:.0f}m, {seconds:.1f}s"
|
Formats seconds into a human readable string with hours, minutes and seconds.
Args:
total_seconds (float): Number of seconds to format
Returns:
str: Formatted string in the format "Xh, Ym, Zs" where X=hours, Y=minutes, Z=seconds
|
format_seconds_to_human_readable
|
python
|
NVlabs/HOVER
|
third_party/rsl_rl/rsl_rl/runners/on_policy_runner.py
|
https://github.com/NVlabs/HOVER/blob/master/third_party/rsl_rl/rsl_rl/runners/on_policy_runner.py
|
Apache-2.0
|
def split_and_pad_trajectories(tensor, dones):
""" Splits trajectories at done indices. Then concatenates them and padds with zeros up to the length og the longest trajectory.
Returns masks corresponding to valid parts of the trajectories
Example:
Input: [ [a1, a2, a3, a4 | a5, a6],
[b1, b2 | b3, b4, b5 | b6]
]
Output:[ [a1, a2, a3, a4], | [ [True, True, True, True],
[a5, a6, 0, 0], | [True, True, False, False],
[b1, b2, 0, 0], | [True, True, False, False],
[b3, b4, b5, 0], | [True, True, True, False],
[b6, 0, 0, 0] | [True, False, False, False],
] | ]
Assumes that the input has the following dimension order: [time, number of envs, additional dimensions]
"""
dones = dones.clone()
dones[-1] = 1
# Permute the buffers to have order (num_envs, num_transitions_per_env, ...), for correct reshaping
flat_dones = dones.transpose(1, 0).reshape(-1, 1)
# Get length of trajectory by counting the number of successive not done elements
done_indices = torch.cat((flat_dones.new_tensor([-1], dtype=torch.int64), flat_dones.nonzero()[:, 0]))
trajectory_lengths = done_indices[1:] - done_indices[:-1]
trajectory_lengths_list = trajectory_lengths.tolist()
# Extract the individual trajectories
trajectories = torch.split(tensor.transpose(1, 0).flatten(0, 1),trajectory_lengths_list)
padded_trajectories = torch.nn.utils.rnn.pad_sequence(trajectories)
trajectory_masks = trajectory_lengths > torch.arange(0, tensor.shape[0], device=tensor.device).unsqueeze(1)
return padded_trajectories, trajectory_masks
|
Splits trajectories at done indices. Then concatenates them and pads with zeros up to the length of the longest trajectory.
Returns masks corresponding to valid parts of the trajectories
Example:
Input: [ [a1, a2, a3, a4 | a5, a6],
[b1, b2 | b3, b4, b5 | b6]
]
Output:[ [a1, a2, a3, a4], | [ [True, True, True, True],
[a5, a6, 0, 0], | [True, True, False, False],
[b1, b2, 0, 0], | [True, True, False, False],
[b3, b4, b5, 0], | [True, True, True, False],
[b6, 0, 0, 0] | [True, False, False, False],
] | ]
Assumes that the input has the following dimension order: [time, number of envs, additional dimensions]
|
split_and_pad_trajectories
|
python
|
NVlabs/HOVER
|
third_party/rsl_rl/rsl_rl/utils/utils.py
|
https://github.com/NVlabs/HOVER/blob/master/third_party/rsl_rl/rsl_rl/utils/utils.py
|
Apache-2.0
|
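A small usage sketch of the splitting and padding behaviour described above; the import path and the toy tensors are assumptions for illustration:
import torch
from rsl_rl.utils.utils import split_and_pad_trajectories  # assumed import path
obs = torch.arange(12, dtype=torch.float32).reshape(6, 2, 1)  # 6 steps, 2 envs, scalar obs
dones = torch.zeros(6, 2)
dones[3, 0] = 1  # env 0 terminates after its 4th step
dones[1, 1] = 1  # env 1 terminates after its 2nd step
padded, masks = split_and_pad_trajectories(obs, dones)
print(padded.shape)  # (longest trajectory length, num trajectories, 1)
print(masks.shape)   # (time steps, num trajectories); True marks valid entries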
def setup(self) -> None:
"""Load the model into memory to make running multiple predictions efficient"""
# download the model weights if the cache folder does not exist yet
if not os.path.exists(MODEL_CACHE):
download(MODEL_URL, MODEL_CACHE)
disable_verbosity()
cv2.setNumThreads(0)
cv2.ocl.setUseOpenCL(False)
config = OmegaConf.load('./configs/inference.yaml')
model_ckpt = config.pretrained_model
model_config = config.config_file
model = create_model(model_config).cpu()
model.load_state_dict(load_state_dict(model_ckpt, location='cuda'))
self.model = model.cuda()
self.ddim_sampler = DDIMSampler(model)
|
Load the model into memory to make running multiple predictions efficient
|
setup
|
python
|
ali-vilab/AnyDoor
|
predict.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/predict.py
|
MIT
|
def predict(
self,
reference_image_path: Path = Input(description="Source Image"),
reference_image_mask: Path = Input(description="Source Image Mask"),
bg_image_path: Path = Input(description="Target Image"),
bg_mask_path: Path = Input(description="Target Image mask"),
control_strength: float = Input(description="Control Strength", default=1.0, ge=0.0, le=2.0),
steps: int = Input(description="Steps", default=50, ge=1, le=100),
guidance_scale: float = Input(description="Guidance Scale", default=4.5, ge=0.1, le=30.0),
enable_shape_control: bool = Input(description="Enable Shape Control", default=False),
seed: int = Input(description="Random seed. Leave blank to randomize the seed", default=None),
) -> Path:
"""Run a single prediction on the model"""
if seed is None:
seed = int.from_bytes(os.urandom(4), "big")
print(f"Using seed: {seed}")
save_path = "/tmp/output.png"
image = cv2.imread(str(reference_image_path), cv2.IMREAD_UNCHANGED)
if image.shape[2] == 1:
image = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
elif image.shape[2] == 4:
image = cv2.cvtColor(image, cv2.COLOR_BGRA2BGR)
ref_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
ref_mask = (cv2.imread(str(reference_image_mask))[:,:,-1] > 128).astype(np.uint8)
# background image
back_image = cv2.imread(str(bg_image_path)).astype(np.uint8)
back_image = cv2.cvtColor(back_image, cv2.COLOR_BGR2RGB)
# background mask
tar_mask = cv2.imread(str(bg_mask_path))[:,:,0] > 128
tar_mask = tar_mask.astype(np.uint8)
gen_image = self.inference_single_image(
ref_image,ref_mask, back_image.copy(), tar_mask,
control_strength, steps, guidance_scale, seed, enable_shape_control)
h, w = back_image.shape[0], back_image.shape[1]
ref_image = cv2.resize(ref_image, (w,h))
vis_image = cv2.hconcat([gen_image])
cv2.imwrite(save_path, vis_image[:, :, ::-1])
return Path(save_path)
|
Run a single prediction on the model
|
predict
|
python
|
ali-vilab/AnyDoor
|
predict.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/predict.py
|
MIT
|
def mask_score(mask):
'''Scoring the mask according to connectivity.'''
mask = mask.astype(np.uint8)
if mask.sum() < 10:
return 0
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnt_area = [cv2.contourArea(cnt) for cnt in contours]
conc_score = np.max(cnt_area) / sum(cnt_area)
return conc_score
|
Scoring the mask according to connectivity.
|
mask_score
|
python
|
ali-vilab/AnyDoor
|
datasets/data_utils.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/datasets/data_utils.py
|
MIT
|
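An illustrative call to the connectivity score above; the synthetic two-blob mask is an assumption made up for the example:
import cv2
import numpy as np
mask = np.zeros((64, 64), dtype=np.uint8)
cv2.rectangle(mask, (5, 5), (25, 25), 1, -1)    # larger component
cv2.rectangle(mask, (40, 40), (55, 55), 1, -1)  # smaller component
print(mask_score(mask))  # largest contour area / total contour area, well below 1.0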
def resize_and_pad(image, box):
'''Fitting an image to the box region while keeping the aspect ratio.'''
y1,y2,x1,x2 = box
H,W = y2-y1, x2-x1
h,w = image.shape[0], image.shape[1]
r_box = W / H
r_image = w / h
if r_box >= r_image:
h_target = H
w_target = int(w * H / h)
image = cv2.resize(image, (w_target, h_target))
w1 = (W - w_target) // 2
w2 = W - w_target - w1
pad_param = ((0,0),(w1,w2),(0,0))
image = np.pad(image, pad_param, 'constant', constant_values=255)
else:
w_target = W
h_target = int(h * W / w)
image = cv2.resize(image, (w_target, h_target))
h1 = (H-h_target) // 2
h2 = H - h_target - h1
pad_param =((h1,h2),(0,0),(0,0))
image = np.pad(image, pad_param, 'constant', constant_values=255)
return image
|
Fitting an image to the box region while keeping the aspect ratio.
|
resize_and_pad
|
python
|
ali-vilab/AnyDoor
|
datasets/data_utils.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/datasets/data_utils.py
|
MIT
|
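A quick sketch of the fitting behaviour above; the image size and box are illustrative assumptions:
import numpy as np
image = np.full((100, 200, 3), 128, dtype=np.uint8)  # h=100, w=200
box = (0, 150, 0, 150)                               # y1, y2, x1, x2
fitted = resize_and_pad(image, box)
print(fitted.shape)  # (150, 150, 3): resized to 75x150, then padded to the box with 255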
def q_x(x_0,t=65):
'''Adding noise to a given image.'''
x_0 = torch.from_numpy(x_0).float() / 127.5 - 1
num_steps = 100
betas = torch.linspace(-6,6,num_steps)
betas = torch.sigmoid(betas)*(0.5e-2 - 1e-5)+1e-5
alphas = 1-betas
alphas_prod = torch.cumprod(alphas,0)
alphas_prod_p = torch.cat([torch.tensor([1]).float(),alphas_prod[:-1]],0)
alphas_bar_sqrt = torch.sqrt(alphas_prod)
one_minus_alphas_bar_log = torch.log(1 - alphas_prod)
one_minus_alphas_bar_sqrt = torch.sqrt(1 - alphas_prod)
noise = torch.randn_like(x_0)
alphas_t = alphas_bar_sqrt[t]
alphas_1_m_t = one_minus_alphas_bar_sqrt[t]
return (alphas_t * x_0 + alphas_1_m_t * noise).numpy() * 127.5 + 127.5
|
Adding noise to a given image.
|
q_x
|
python
|
ali-vilab/AnyDoor
|
datasets/data_utils.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/datasets/data_utils.py
|
MIT
|
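A minimal call to the noising helper above; the random uint8-range input image is an assumption for illustration:
import numpy as np
clean = np.random.randint(0, 256, size=(64, 64, 3)).astype(np.float32)
noisy = q_x(clean, t=65)  # same shape, blended with Gaussian noise and mapped back to roughly [0, 255]
print(noisy.shape)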
def make_dataset(
*,
dataset_str: str,
transform: Optional[Callable] = None,
target_transform: Optional[Callable] = None,
):
"""
Creates a dataset with the specified parameters.
Args:
dataset_str: A dataset string description (e.g. ImageNet:split=TRAIN).
transform: A transform to apply to images.
target_transform: A transform to apply to targets.
Returns:
The created dataset.
"""
logger.info(f'using dataset: "{dataset_str}"')
class_, kwargs = _parse_dataset_str(dataset_str)
dataset = class_(transform=transform, target_transform=target_transform, **kwargs)
logger.info(f"# of dataset samples: {len(dataset):,d}")
# Aggregated datasets do not expose (yet) these attributes, so add them.
if not hasattr(dataset, "transform"):
setattr(dataset, "transform", transform)
if not hasattr(dataset, "target_transform"):
setattr(dataset, "target_transform", target_transform)
return dataset
|
Creates a dataset with the specified parameters.
Args:
dataset_str: A dataset string description (e.g. ImageNet:split=TRAIN).
transform: A transform to apply to images.
target_transform: A transform to apply to targets.
Returns:
The created dataset.
|
make_dataset
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/data/loaders.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/data/loaders.py
|
MIT
|
def make_data_loader(
*,
dataset,
batch_size: int,
num_workers: int,
shuffle: bool = True,
seed: int = 0,
sampler_type: Optional[SamplerType] = SamplerType.INFINITE,
sampler_size: int = -1,
sampler_advance: int = 0,
drop_last: bool = True,
persistent_workers: bool = False,
collate_fn: Optional[Callable[[List[T]], Any]] = None,
):
"""
Creates a data loader with the specified parameters.
Args:
dataset: A dataset (third party, LaViDa or WebDataset).
batch_size: The size of batches to generate.
num_workers: The number of workers to use.
shuffle: Whether to shuffle samples.
seed: The random seed to use.
sampler_type: Which sampler to use: EPOCH, INFINITE, SHARDED_INFINITE, SHARDED_INFINITE_NEW, DISTRIBUTED or None.
sampler_size: The number of images per epoch (when applicable) or -1 for the entire dataset.
sampler_advance: How many samples to skip (when applicable).
drop_last: Whether the last non-full batch of data should be dropped.
persistent_workers: keep worker Dataset instances alive after the dataset has been consumed once.
collate_fn: Function that performs batch collation
"""
sampler = _make_sampler(
dataset=dataset,
type=sampler_type,
shuffle=shuffle,
seed=seed,
size=sampler_size,
advance=sampler_advance,
)
logger.info("using PyTorch data loader")
data_loader = torch.utils.data.DataLoader(
dataset,
sampler=sampler,
batch_size=batch_size,
num_workers=num_workers,
pin_memory=True,
drop_last=drop_last,
persistent_workers=persistent_workers,
collate_fn=collate_fn,
)
try:
logger.info(f"# of batches: {len(data_loader):,d}")
except TypeError: # data loader has no length
logger.info("infinite data loader")
return data_loader
|
Creates a data loader with the specified parameters.
Args:
dataset: A dataset (third party, LaViDa or WebDataset).
batch_size: The size of batches to generate.
num_workers: The number of workers to use.
shuffle: Whether to shuffle samples.
seed: The random seed to use.
sampler_type: Which sampler to use: EPOCH, INFINITE, SHARDED_INFINITE, SHARDED_INFINITE_NEW, DISTRIBUTED or None.
sampler_size: The number of images per epoch (when applicable) or -1 for the entire dataset.
sampler_advance: How many samples to skip (when applicable).
drop_last: Whether the last non-full batch of data should be dropped.
persistent_workers: keep worker Dataset instances alive after the dataset has been consumed once.
collate_fn: Function that performs batch collation
|
make_data_loader
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/data/loaders.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/data/loaders.py
|
MIT
|
def _generate_randperm_indices(*, size: int, generator: torch.Generator):
"""Generate the indices of a random permutation."""
dtype = _get_torch_dtype(size)
# This is actually matching PyTorch's CPU implementation, see: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/TensorFactories.cpp#L900-L921
perm = torch.arange(size, dtype=dtype)
for i in range(size):
j = torch.randint(i, size, size=(1,), generator=generator).item()
# Always swap even if no-op
value = perm[j].item()
perm[j] = perm[i].item()
perm[i] = value
yield value
|
Generate the indices of a random permutation.
|
_generate_randperm_indices
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/data/samplers.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/data/samplers.py
|
MIT
|
def __call__(self, pic):
"""
Args:
pic (PIL Image, numpy.ndarray or torch.tensor): Image to be converted to tensor.
Returns:
Tensor: Converted image.
"""
if isinstance(pic, torch.Tensor):
return pic
return super().__call__(pic)
|
Args:
pic (PIL Image, numpy.ndarray or torch.tensor): Image to be converted to tensor.
Returns:
Tensor: Converted image.
|
__call__
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/data/transforms.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/data/transforms.py
|
MIT
|
def get_local_rank() -> int:
"""
Returns:
The rank of the current process within the local (per-machine) process group.
"""
if not is_enabled():
return 0
assert 0 <= _LOCAL_RANK < _LOCAL_WORLD_SIZE
return _LOCAL_RANK
|
Returns:
The rank of the current process within the local (per-machine) process group.
|
get_local_rank
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/distributed/__init__.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/distributed/__init__.py
|
MIT
|
def get_local_size() -> int:
"""
Returns:
The size of the per-machine process group,
i.e. the number of processes per machine.
"""
if not is_enabled():
return 1
assert 0 <= _LOCAL_RANK < _LOCAL_WORLD_SIZE
return _LOCAL_WORLD_SIZE
|
Returns:
The size of the per-machine process group,
i.e. the number of processes per machine.
|
get_local_size
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/distributed/__init__.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/distributed/__init__.py
|
MIT
|
def _restrict_print_to_main_process() -> None:
"""
This function disables printing when not in the main process
"""
import builtins as __builtin__
builtin_print = __builtin__.print
def print(*args, **kwargs):
force = kwargs.pop("force", False)
if is_main_process() or force:
builtin_print(*args, **kwargs)
__builtin__.print = print
|
This function disables printing when not in the main process
|
_restrict_print_to_main_process
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/distributed/__init__.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/distributed/__init__.py
|
MIT
|
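Once the override above is installed, per-rank behaviour looks like this (a hedged sketch; the messages are made up):
_restrict_print_to_main_process()
print("visible on the main process only")
print("visible on every rank", force=True)  # the extra kwarg bypasses the filter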
def enable(*, set_cuda_current_device: bool = True, overwrite: bool = False, allow_nccl_timeout: bool = False):
"""Enable distributed mode
Args:
set_cuda_current_device: If True, call torch.cuda.set_device() to set the
current PyTorch CUDA device to the one matching the local rank.
overwrite: If True, overwrites already set variables. Else fails.
"""
global _LOCAL_RANK, _LOCAL_WORLD_SIZE
if _LOCAL_RANK >= 0 or _LOCAL_WORLD_SIZE >= 0:
raise RuntimeError("Distributed mode has already been enabled")
torch_env = _TorchDistributedEnvironment()
torch_env.export(overwrite=overwrite)
if set_cuda_current_device:
torch.cuda.set_device(torch_env.local_rank)
if allow_nccl_timeout:
# This allows to use torch distributed timeout in a NCCL backend
key, value = "NCCL_ASYNC_ERROR_HANDLING", "1"
if not overwrite:
_check_env_variable(key, value)
os.environ[key] = value
dist.init_process_group(backend="nccl")
dist.barrier()
# Finalize setup
_LOCAL_RANK = torch_env.local_rank
_LOCAL_WORLD_SIZE = torch_env.local_world_size
_restrict_print_to_main_process()
|
Enable distributed mode
Args:
set_cuda_current_device: If True, call torch.cuda.set_device() to set the
current PyTorch CUDA device to the one matching the local rank.
overwrite: If True, overwrites already set variables. Else fails.
|
enable
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/distributed/__init__.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/distributed/__init__.py
|
MIT
|
def forward(self, features_rank):
"""
Compute the results on all values of `self.nb_knn` neighbors from the full `self.max_k`
"""
assert all(k <= self.max_k for k in self.nb_knn)
topk_sims, neighbors_labels = self.compute_neighbors(features_rank)
batch_size = neighbors_labels.shape[0]
topk_sims_transform = softmax(topk_sims / self.T, 1)
matmul = torch.mul(
one_hot(neighbors_labels, num_classes=self.num_classes),
topk_sims_transform.view(batch_size, -1, 1),
)
probas_for_k = {k: torch.sum(matmul[:, :k, :], 1) for k in self.nb_knn}
return probas_for_k
|
Compute the results on all values of `self.nb_knn` neighbors from the full `self.max_k`
|
forward
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/eval/knn.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/eval/knn.py
|
MIT
|
def eval_log_regression(
*,
model,
train_dataset,
val_dataset,
finetune_dataset,
metric_type,
batch_size,
num_workers,
finetune_on_val=False,
train_dtype=torch.float64,
train_features_device=_CPU_DEVICE,
max_train_iters=DEFAULT_MAX_ITER,
):
"""
Implements the "standard" process for log regression evaluation:
The value of C is chosen by training on train_dataset and evaluating on
finetune_dataset. Then, the final model is trained on a concatenation of
train_dataset and finetune_dataset, and is evaluated on val_dataset.
If there is no finetune_dataset, the value of C is the one that yields
the best results on a random 10% subset of the train dataset
"""
start = time.time()
train_features, train_labels = extract_features(
model, train_dataset, batch_size, num_workers, gather_on_cpu=(train_features_device == _CPU_DEVICE)
)
val_features, val_labels = extract_features(
model, val_dataset, batch_size, num_workers, gather_on_cpu=(train_features_device == _CPU_DEVICE)
)
val_data_loader = torch.utils.data.DataLoader(
TensorDataset(val_features, val_labels),
batch_size=batch_size,
drop_last=False,
num_workers=0,
persistent_workers=False,
)
if finetune_dataset is None and finetune_on_val:
logger.info("Choosing hyperparameters on the val dataset")
finetune_features, finetune_labels = val_features, val_labels
elif finetune_dataset is None and not finetune_on_val:
logger.info("Choosing hyperparameters on 10% of the train dataset")
torch.manual_seed(0)
indices = torch.randperm(len(train_features), device=train_features.device)
finetune_index = indices[: len(train_features) // 10]
train_index = indices[len(train_features) // 10 :]
finetune_features, finetune_labels = train_features[finetune_index], train_labels[finetune_index]
train_features, train_labels = train_features[train_index], train_labels[train_index]
else:
logger.info("Choosing hyperparameters on the finetune dataset")
finetune_features, finetune_labels = extract_features(
model, finetune_dataset, batch_size, num_workers, gather_on_cpu=(train_features_device == _CPU_DEVICE)
)
# release the model - free GPU memory
del model
gc.collect()
torch.cuda.empty_cache()
finetune_data_loader = torch.utils.data.DataLoader(
TensorDataset(finetune_features, finetune_labels),
batch_size=batch_size,
drop_last=False,
)
if len(train_labels.shape) > 1:
num_classes = train_labels.shape[1]
else:
num_classes = train_labels.max() + 1
logger.info("Using cuML for logistic regression")
best_stats, best_C = sweep_C_values(
train_features=train_features,
train_labels=train_labels,
test_data_loader=finetune_data_loader,
metric_type=metric_type,
num_classes=num_classes,
train_dtype=train_dtype,
train_features_device=train_features_device,
max_train_iters=max_train_iters,
)
if not finetune_on_val:
logger.info("Best parameter found, concatenating features")
train_features = torch.cat((train_features, finetune_features))
train_labels = torch.cat((train_labels, finetune_labels))
logger.info("Training final model")
logreg_metric = build_metric(metric_type, num_classes=num_classes)
evals = train_and_evaluate(
C=best_C,
max_iter=max_train_iters,
train_features=train_features,
train_labels=train_labels,
logreg_metric=logreg_metric.clone(),
test_data_loader=val_data_loader,
eval_device=torch.cuda.current_device(),
train_dtype=train_dtype,
train_features_device=train_features_device,
)
best_stats = evals[1]["metrics"]
best_stats["best_C"] = best_C
logger.info(f"Log regression evaluation done in {int(time.time() - start)}s")
return best_stats
|
Implements the "standard" process for log regression evaluation:
The value of C is chosen by training on train_dataset and evaluating on
finetune_dataset. Then, the final model is trained on a concatenation of
train_dataset and finetune_dataset, and is evaluated on val_dataset.
If there is no finetune_dataset, the value of C is the one that yields
the best results on a random 10% subset of the train dataset
|
eval_log_regression
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/eval/log_regression.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/eval/log_regression.py
|
MIT
|
def save(self, name: str, **kwargs: Any) -> None:
"""
Dump model and checkpointables to a file.
Args:
name (str): name of the file.
kwargs (dict): extra arbitrary data to save.
"""
if not self.save_dir or not self.save_to_disk:
return
data = {}
with FSDP.state_dict_type(self.model, StateDictType.LOCAL_STATE_DICT):
data["model"] = self.model.state_dict()
# data["model"] = self.model.state_dict()
for key, obj in self.checkpointables.items():
data[key] = obj.state_dict()
data.update(kwargs)
basename = f"{name}.{rankstr()}.pth"
save_file = os.path.join(self.save_dir, basename)
assert os.path.basename(save_file) == basename, basename
self.logger.info("Saving checkpoint to {}".format(save_file))
with self.path_manager.open(save_file, "wb") as f:
torch.save(data, f)
self.tag_last_checkpoint(basename)
|
Dump model and checkpointables to a file.
Args:
name (str): name of the file.
kwargs (dict): extra arbitrary data to save.
|
save
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/fsdp/__init__.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/fsdp/__init__.py
|
MIT
|
def get_checkpoint_file(self) -> str:
"""
Returns:
str: The latest checkpoint file in target directory.
"""
save_file = os.path.join(self.save_dir, f"last_checkpoint.{rankstr()}")
try:
with self.path_manager.open(save_file, "r") as f:
last_saved = f.read().strip()
except IOError:
# if file doesn't exist, maybe because it has just been
# deleted by a separate process
return ""
# pyre-fixme[6]: For 2nd param expected `Union[PathLike[str], str]` but got
# `Union[bytes, str]`.
return os.path.join(self.save_dir, last_saved)
|
Returns:
str: The latest checkpoint file in target directory.
|
get_checkpoint_file
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/fsdp/__init__.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/fsdp/__init__.py
|
MIT
|
def tag_last_checkpoint(self, last_filename_basename: str) -> None:
"""
Tag the last checkpoint.
Args:
last_filename_basename (str): the basename of the last filename.
"""
if distributed.is_enabled():
torch.distributed.barrier()
save_file = os.path.join(self.save_dir, f"last_checkpoint.{rankstr()}")
with self.path_manager.open(save_file, "w") as f:
f.write(last_filename_basename) # pyre-ignore
|
Tag the last checkpoint.
Args:
last_filename_basename (str): the basename of the last filename.
|
tag_last_checkpoint
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/fsdp/__init__.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/fsdp/__init__.py
|
MIT
|
def get_attn_bias_and_cat(x_list, branges=None):
"""
this will perform the index select, cat the tensors, and provide the attn_bias from cache
"""
batch_sizes = [b.shape[0] for b in branges] if branges is not None else [x.shape[0] for x in x_list]
all_shapes = tuple((b, x.shape[1]) for b, x in zip(batch_sizes, x_list))
if all_shapes not in attn_bias_cache.keys():
seqlens = []
for b, x in zip(batch_sizes, x_list):
for _ in range(b):
seqlens.append(x.shape[1])
attn_bias = fmha.BlockDiagonalMask.from_seqlens(seqlens)
attn_bias._batch_sizes = batch_sizes
attn_bias_cache[all_shapes] = attn_bias
if branges is not None:
cat_tensors = index_select_cat([x.flatten(1) for x in x_list], branges).view(1, -1, x_list[0].shape[-1])
else:
tensors_bs1 = tuple(x.reshape([1, -1, *x.shape[2:]]) for x in x_list)
cat_tensors = torch.cat(tensors_bs1, dim=1)
return attn_bias_cache[all_shapes], cat_tensors
|
this will perform the index select, cat the tensors, and provide the attn_bias from cache
|
get_attn_bias_and_cat
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/layers/block.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/layers/block.py
|
MIT
|
def forward_nested(self, x_list: List[Tensor]) -> List[Tensor]:
"""
x_list contains a list of tensors to nest together and run
"""
assert isinstance(self.attn, MemEffAttention)
if self.training and self.sample_drop_ratio > 0.0:
def attn_residual_func(x: Tensor, attn_bias=None) -> Tensor:
return self.attn(self.norm1(x), attn_bias=attn_bias)
def ffn_residual_func(x: Tensor, attn_bias=None) -> Tensor:
return self.mlp(self.norm2(x))
x_list = drop_add_residual_stochastic_depth_list(
x_list,
residual_func=attn_residual_func,
sample_drop_ratio=self.sample_drop_ratio,
scaling_vector=self.ls1.gamma if isinstance(self.ls1, LayerScale) else None,
)
x_list = drop_add_residual_stochastic_depth_list(
x_list,
residual_func=ffn_residual_func,
sample_drop_ratio=self.sample_drop_ratio,
scaling_vector=self.ls2.gamma if isinstance(self.ls1, LayerScale) else None,
)
return x_list
else:
def attn_residual_func(x: Tensor, attn_bias=None) -> Tensor:
return self.ls1(self.attn(self.norm1(x), attn_bias=attn_bias))
def ffn_residual_func(x: Tensor, attn_bias=None) -> Tensor:
return self.ls2(self.mlp(self.norm2(x)))
attn_bias, x = get_attn_bias_and_cat(x_list)
x = x + attn_residual_func(x, attn_bias=attn_bias)
x = x + ffn_residual_func(x)
return attn_bias.split(x)
|
x_list contains a list of tensors to nest together and run
|
forward_nested
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/layers/block.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/layers/block.py
|
MIT
|
def synchronize_between_processes(self):
"""
Distributed synchronization of the metric
Warning: does not synchronize the deque!
"""
if not distributed.is_enabled():
return
t = torch.tensor([self.count, self.total], dtype=torch.float64, device="cuda")
torch.distributed.barrier()
torch.distributed.all_reduce(t)
t = t.tolist()
self.count = int(t[0])
self.total = t[1]
|
Distributed synchronization of the metric
Warning: does not synchronize the deque!
|
synchronize_between_processes
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/logging/helpers.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/logging/helpers.py
|
MIT
|
def _configure_logger(
name: Optional[str] = None,
*,
level: int = logging.DEBUG,
output: Optional[str] = None,
):
"""
Configure a logger.
Adapted from Detectron2.
Args:
name: The name of the logger to configure.
level: The logging level to use.
output: A file name or a directory to save log. If None, will not save log file.
If ends with ".txt" or ".log", assumed to be a file name.
Otherwise, logs will be saved to `output/log.txt`.
Returns:
The configured logger.
"""
logger = logging.getLogger(name)
logger.setLevel(level)
logger.propagate = False
# Loosely match Google glog format:
# [IWEF]yyyymmdd hh:mm:ss.uuuuuu threadid file:line] msg
# but use a shorter timestamp and include the logger name:
# [IWEF]yyyymmdd hh:mm:ss logger threadid file:line] msg
fmt_prefix = "%(levelname).1s%(asctime)s %(process)s %(name)s %(filename)s:%(lineno)s] "
fmt_message = "%(message)s"
fmt = fmt_prefix + fmt_message
datefmt = "%Y%m%d %H:%M:%S"
formatter = logging.Formatter(fmt=fmt, datefmt=datefmt)
# stdout logging for main worker only
if distributed.is_main_process():
handler = logging.StreamHandler(stream=sys.stdout)
handler.setLevel(logging.DEBUG)
handler.setFormatter(formatter)
logger.addHandler(handler)
# file logging for all workers
if output:
if os.path.splitext(output)[-1] in (".txt", ".log"):
filename = output
else:
filename = os.path.join(output, "logs", "log.txt")
if not distributed.is_main_process():
global_rank = distributed.get_global_rank()
filename = filename + ".rank{}".format(global_rank)
os.makedirs(os.path.dirname(filename), exist_ok=True)
handler = logging.StreamHandler(open(filename, "a"))
handler.setLevel(logging.DEBUG)
handler.setFormatter(formatter)
logger.addHandler(handler)
return logger
|
Configure a logger.
Adapted from Detectron2.
Args:
name: The name of the logger to configure.
level: The logging level to use.
output: A file name or a directory to save log. If None, will not save log file.
If ends with ".txt" or ".log", assumed to be a file name.
Otherwise, logs will be saved to `output/log.txt`.
Returns:
The configured logger.
|
_configure_logger
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/logging/__init__.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/logging/__init__.py
|
MIT
|
def setup_logging(
output: Optional[str] = None,
*,
name: Optional[str] = None,
level: int = logging.DEBUG,
capture_warnings: bool = True,
) -> None:
"""
Setup logging.
Args:
output: A file name or a directory to save log files. If None, log
files will not be saved. If output ends with ".txt" or ".log", it
is assumed to be a file name.
Otherwise, logs will be saved to `output/log.txt`.
name: The name of the logger to configure, by default the root logger.
level: The logging level to use.
capture_warnings: Whether warnings should be captured as logs.
"""
logging.captureWarnings(capture_warnings)
_configure_logger(name, level=level, output=output)
|
Setup logging.
Args:
output: A file name or a directory to save log files. If None, log
files will not be saved. If output ends with ".txt" or ".log", it
is assumed to be a file name.
Otherwise, logs will be saved to `output/log.txt`.
name: The name of the logger to configure, by default the root logger.
level: The logging level to use.
capture_warnings: Whether warnings should be captured as logs.
|
setup_logging
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/logging/__init__.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/logging/__init__.py
|
MIT
|
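A typical call to the logging setup above; the output directory is an assumption, and the import path follows the file location listed for this entry:
import logging
from dinov2.logging import setup_logging
setup_logging(output="./run_output", level=logging.INFO)
# main process: stdout plus ./run_output/logs/log.txt; other ranks: per-rank log files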
def forward(self, student_output_list, teacher_out_softmaxed_centered_list):
"""
Cross-entropy between softmax outputs of the teacher and student networks.
"""
# TODO: Use cross_entropy_distribution here
total_loss = 0
for s in student_output_list:
lsm = F.log_softmax(s / self.student_temp, dim=-1)
for t in teacher_out_softmaxed_centered_list:
loss = torch.sum(t * lsm, dim=-1)
total_loss -= loss.mean()
return total_loss
|
Cross-entropy between softmax outputs of the teacher and student networks.
|
forward
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/loss/dino_clstoken_loss.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/loss/dino_clstoken_loss.py
|
MIT
|
def forward(self, student_patch_tokens, teacher_patch_tokens, student_masks_flat):
"""
Cross-entropy between softmax outputs of the teacher and student networks.
student_patch_tokens: (B, N, D) tensor
teacher_patch_tokens: (B, N, D) tensor
student_masks_flat: (B, N) tensor
"""
t = teacher_patch_tokens
s = student_patch_tokens
loss = torch.sum(t * F.log_softmax(s / self.student_temp, dim=-1), dim=-1)
loss = torch.sum(loss * student_masks_flat.float(), dim=-1) / student_masks_flat.sum(dim=-1).clamp(min=1.0)
return -loss.mean()
|
Cross-entropy between softmax outputs of the teacher and student networks.
student_patch_tokens: (B, N, D) tensor
teacher_patch_tokens: (B, N, D) tensor
student_masks_flat: (B, N) tensor
|
forward
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/loss/ibot_patch_loss.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/loss/ibot_patch_loss.py
|
MIT
|
def pairwise_NNs_inner(self, x):
"""
Pairwise nearest neighbors for L2-normalized vectors.
Uses Torch rather than Faiss to remain on GPU.
"""
# pairwise dot products (= inverse distance)
dots = torch.mm(x, x.t())
n = x.shape[0]
dots.view(-1)[:: (n + 1)].fill_(-1) # Trick to fill diagonal with -1
# max inner prod -> min distance
_, I = torch.max(dots, dim=1) # noqa: E741
return I
|
Pairwise nearest neighbors for L2-normalized vectors.
Uses Torch rather than Faiss to remain on GPU.
|
pairwise_NNs_inner
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/loss/koleo_loss.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/loss/koleo_loss.py
|
MIT
|
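A standalone illustration of the diagonal-fill trick used above; the toy tensor is an assumption for the example:
import torch
import torch.nn.functional as F
x = F.normalize(torch.randn(4, 8), dim=-1)
dots = x @ x.t()
n = x.shape[0]
dots.view(-1)[:: (n + 1)].fill_(-1)  # stride n+1 over the flat view hits exactly the diagonal
nearest = dots.argmax(dim=1)         # each row's nearest neighbour, never itself
print(nearest)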
def forward(self, student_output, eps=1e-8):
"""
Args:
student_output (BxD): backbone output of student
"""
with torch.cuda.amp.autocast(enabled=False):
student_output = F.normalize(student_output, eps=eps, p=2, dim=-1)
I = self.pairwise_NNs_inner(student_output) # noqa: E741
distances = self.pdist(student_output, student_output[I]) # BxD, BxD -> B
loss = -torch.log(distances + eps).mean()
return loss
|
Args:
student_output (BxD): backbone output of student
|
forward
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/loss/koleo_loss.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/loss/koleo_loss.py
|
MIT
|
def __init__(
self,
img_size=224,
patch_size=16,
in_chans=3,
embed_dim=768,
depth=12,
num_heads=12,
mlp_ratio=4.0,
qkv_bias=True,
ffn_bias=True,
proj_bias=True,
drop_path_rate=0.0,
drop_path_uniform=False,
init_values=None, # for layerscale: None or 0 => no layerscale
embed_layer=PatchEmbed,
act_layer=nn.GELU,
block_fn=Block,
ffn_layer="mlp",
block_chunks=1,
):
"""
Args:
img_size (int, tuple): input image size
patch_size (int, tuple): patch size
in_chans (int): number of input channels
embed_dim (int): embedding dimension
depth (int): depth of transformer
num_heads (int): number of attention heads
mlp_ratio (int): ratio of mlp hidden dim to embedding dim
qkv_bias (bool): enable bias for qkv if True
proj_bias (bool): enable bias for proj in attn if True
ffn_bias (bool): enable bias for ffn if True
drop_path_rate (float): stochastic depth rate
drop_path_uniform (bool): apply uniform drop rate across blocks
weight_init (str): weight init scheme
init_values (float): layer-scale init values
embed_layer (nn.Module): patch embedding layer
act_layer (nn.Module): MLP activation layer
block_fn (nn.Module): transformer block class
ffn_layer (str): "mlp", "swiglu", "swiglufused" or "identity"
block_chunks: (int) split block sequence into block_chunks units for FSDP wrap
"""
super().__init__()
norm_layer = partial(nn.LayerNorm, eps=1e-6)
self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
self.num_tokens = 1
self.n_blocks = depth
self.num_heads = num_heads
self.patch_size = patch_size
self.patch_embed = embed_layer(img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim)
num_patches = self.patch_embed.num_patches
self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + self.num_tokens, embed_dim))
if drop_path_uniform is True:
dpr = [drop_path_rate] * depth
else:
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule
if ffn_layer == "mlp":
logger.info("using MLP layer as FFN")
ffn_layer = Mlp
elif ffn_layer == "swiglufused" or ffn_layer == "swiglu":
logger.info("using SwiGLU layer as FFN")
ffn_layer = SwiGLUFFNFused
elif ffn_layer == "identity":
logger.info("using Identity layer as FFN")
def f(*args, **kwargs):
return nn.Identity()
ffn_layer = f
else:
raise NotImplementedError
blocks_list = [
block_fn(
dim=embed_dim,
num_heads=num_heads,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
proj_bias=proj_bias,
ffn_bias=ffn_bias,
drop_path=dpr[i],
norm_layer=norm_layer,
act_layer=act_layer,
ffn_layer=ffn_layer,
init_values=init_values,
)
for i in range(depth)
]
if block_chunks > 0:
self.chunked_blocks = True
chunked_blocks = []
chunksize = depth // block_chunks
for i in range(0, depth, chunksize):
# this is to keep the block index consistent if we chunk the block list
chunked_blocks.append([nn.Identity()] * i + blocks_list[i : i + chunksize])
self.blocks = nn.ModuleList([BlockChunk(p) for p in chunked_blocks])
else:
self.chunked_blocks = False
self.blocks = nn.ModuleList(blocks_list)
self.norm = norm_layer(embed_dim)
self.head = nn.Identity()
self.mask_token = nn.Parameter(torch.zeros(1, embed_dim))
self.init_weights()
|
Args:
img_size (int, tuple): input image size
patch_size (int, tuple): patch size
in_chans (int): number of input channels
embed_dim (int): embedding dimension
depth (int): depth of transformer
num_heads (int): number of attention heads
mlp_ratio (int): ratio of mlp hidden dim to embedding dim
qkv_bias (bool): enable bias for qkv if True
proj_bias (bool): enable bias for proj in attn if True
ffn_bias (bool): enable bias for ffn if True
drop_path_rate (float): stochastic depth rate
drop_path_uniform (bool): apply uniform drop rate across blocks
weight_init (str): weight init scheme
init_values (float): layer-scale init values
embed_layer (nn.Module): patch embedding layer
act_layer (nn.Module): MLP activation layer
block_fn (nn.Module): transformer block class
ffn_layer (str): "mlp", "swiglu", "swiglufused" or "identity"
block_chunks: (int) split block sequence into block_chunks units for FSDP wrap
|
__init__
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/models/vision_transformer.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/models/vision_transformer.py
|
MIT
|
def init_weights_vit_timm(module: nn.Module, name: str = ""):
"""ViT weight initialization, original timm impl (for reproducibility)"""
if isinstance(module, nn.Linear):
trunc_normal_(module.weight, std=0.02)
if module.bias is not None:
nn.init.zeros_(module.bias)
|
ViT weight initialization, original timm impl (for reproducibility)
|
init_weights_vit_timm
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/models/vision_transformer.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/models/vision_transformer.py
|
MIT
|
def vit_giant2(patch_size=16, **kwargs):
"""
Close to ViT-giant, with embed-dim 1536 and 24 heads => embed-dim per head 64
"""
model = DinoVisionTransformer(
patch_size=patch_size,
embed_dim=1536,
depth=40,
num_heads=24,
mlp_ratio=4,
block_fn=partial(Block, attn_class=MemEffAttention),
**kwargs,
)
return model
|
Close to ViT-giant, with embed-dim 1536 and 24 heads => embed-dim per head 64
|
vit_giant2
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/models/vision_transformer.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/models/vision_transformer.py
|
MIT
|
def setup(args):
"""
Create configs and perform basic setups.
"""
cfg = get_cfg_from_args(args)
os.makedirs(args.output_dir, exist_ok=True)
default_setup(args)
apply_scaling_rules_to_cfg(cfg)
write_config(cfg, args.output_dir)
return cfg
|
Create configs and perform basic setups.
|
setup
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/utils/config.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/utils/config.py
|
MIT
|
def get_vit_lr_decay_rate(name, lr_decay_rate=1.0, num_layers=12, force_is_backbone=False, chunked_blocks=False):
"""
Calculate lr decay rate for different ViT blocks.
Args:
name (string): parameter name.
lr_decay_rate (float): base lr decay rate.
num_layers (int): number of ViT blocks.
Returns:
lr decay rate for the given parameter.
"""
layer_id = num_layers + 1
if name.startswith("backbone") or force_is_backbone:
if ".pos_embed" in name or ".patch_embed" in name or ".mask_token" in name or ".cls_token" in name:
layer_id = 0
elif force_is_backbone and (
"pos_embed" in name or "patch_embed" in name or "mask_token" in name or "cls_token" in name
):
layer_id = 0
elif ".blocks." in name and ".residual." not in name:
layer_id = int(name[name.find(".blocks.") :].split(".")[2]) + 1
elif chunked_blocks and "blocks." in name and "residual." not in name:
layer_id = int(name[name.find("blocks.") :].split(".")[2]) + 1
elif "blocks." in name and "residual." not in name:
layer_id = int(name[name.find("blocks.") :].split(".")[1]) + 1
return lr_decay_rate ** (num_layers + 1 - layer_id)
|
Calculate lr decay rate for different ViT blocks.
Args:
name (string): parameter name.
lr_decay_rate (float): base lr decay rate.
num_layers (int): number of ViT blocks.
Returns:
lr decay rate for the given parameter.
|
get_vit_lr_decay_rate
|
python
|
ali-vilab/AnyDoor
|
dinov2/dinov2/utils/param_groups.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/dinov2/dinov2/utils/param_groups.py
|
MIT
|
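A quick illustration of the layer-wise decay above; the parameter names are representative assumptions:
for name in [
    "backbone.patch_embed.proj.weight",   # layer_id 0  -> 0.9 ** 13
    "backbone.blocks.0.attn.qkv.weight",  # layer_id 1  -> 0.9 ** 12
    "backbone.blocks.11.mlp.fc1.weight",  # layer_id 12 -> 0.9 ** 1
]:
    print(name, get_vit_lr_decay_rate(name, lr_decay_rate=0.9, num_layers=12))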
def __init__(self, params, lr=1.e-3, betas=(0.9, 0.999), eps=1.e-8, # TODO: check hyperparameters before using
weight_decay=1.e-2, amsgrad=False, ema_decay=0.9999, # ema decay to match previous code
ema_power=1., param_names=()):
"""AdamW that saves EMA versions of the parameters."""
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
if not 0.0 <= weight_decay:
raise ValueError("Invalid weight_decay value: {}".format(weight_decay))
if not 0.0 <= ema_decay <= 1.0:
raise ValueError("Invalid ema_decay value: {}".format(ema_decay))
defaults = dict(lr=lr, betas=betas, eps=eps,
weight_decay=weight_decay, amsgrad=amsgrad, ema_decay=ema_decay,
ema_power=ema_power, param_names=param_names)
super().__init__(params, defaults)
|
AdamW that saves EMA versions of the parameters.
|
__init__
|
python
|
ali-vilab/AnyDoor
|
ldm/util.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/ldm/util.py
|
MIT
|
def step(self, closure=None):
"""Performs a single optimization step.
Args:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
with torch.enable_grad():
loss = closure()
for group in self.param_groups:
params_with_grad = []
grads = []
exp_avgs = []
exp_avg_sqs = []
ema_params_with_grad = []
state_sums = []
max_exp_avg_sqs = []
state_steps = []
amsgrad = group['amsgrad']
beta1, beta2 = group['betas']
ema_decay = group['ema_decay']
ema_power = group['ema_power']
for p in group['params']:
if p.grad is None:
continue
params_with_grad.append(p)
if p.grad.is_sparse:
raise RuntimeError('AdamW does not support sparse gradients')
grads.append(p.grad)
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p, memory_format=torch.preserve_format)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format)
if amsgrad:
# Maintains max of all exp. moving avg. of sq. grad. values
state['max_exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format)
# Exponential moving average of parameter values
state['param_exp_avg'] = p.detach().float().clone()
exp_avgs.append(state['exp_avg'])
exp_avg_sqs.append(state['exp_avg_sq'])
ema_params_with_grad.append(state['param_exp_avg'])
if amsgrad:
max_exp_avg_sqs.append(state['max_exp_avg_sq'])
# update the steps for each param group update
state['step'] += 1
# record the step after step update
state_steps.append(state['step'])
optim._functional.adamw(params_with_grad,
grads,
exp_avgs,
exp_avg_sqs,
max_exp_avg_sqs,
state_steps,
amsgrad=amsgrad,
beta1=beta1,
beta2=beta2,
lr=group['lr'],
weight_decay=group['weight_decay'],
eps=group['eps'],
maximize=False)
cur_ema_decay = min(ema_decay, 1 - state['step'] ** -ema_power)
for param, ema_param in zip(params_with_grad, ema_params_with_grad):
ema_param.mul_(cur_ema_decay).add_(param.float(), alpha=1 - cur_ema_decay)
return loss
|
Performs a single optimization step.
Args:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
|
step
|
python
|
ali-vilab/AnyDoor
|
ldm/util.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/ldm/util.py
|
MIT
|
def q_mean_variance(self, x_start, t):
"""
Get the distribution q(x_t | x_0).
:param x_start: the [N x C x ...] tensor of noiseless inputs.
:param t: the number of diffusion steps (minus 1). Here, 0 means one step.
:return: A tuple (mean, variance, log_variance), all of x_start's shape.
"""
mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start)
variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape)
log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape)
return mean, variance, log_variance
|
Get the distribution q(x_t | x_0).
:param x_start: the [N x C x ...] tensor of noiseless inputs.
:param t: the number of diffusion steps (minus 1). Here, 0 means one step.
:return: A tuple (mean, variance, log_variance), all of x_start's shape.
|
q_mean_variance
|
python
|
ali-vilab/AnyDoor
|
ldm/models/diffusion/ddpm.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/ddpm.py
|
MIT
|
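For context, a self-contained sketch of how the q(x_t | x_0) statistics above fall out of a beta schedule. The `extract_into_tensor` helper here follows the usual latent-diffusion definition (gather per-timestep coefficients and reshape for broadcasting); that it matches the repo's helper exactly is an assumption.

import torch

def extract_into_tensor(a, t, x_shape):
    # Gather a[t] for each batch element and reshape to broadcast over x.
    b = t.shape[0]
    out = a.gather(-1, t)
    return out.reshape(b, *((1,) * (len(x_shape) - 1)))

# Toy schedule: q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)
betas = torch.linspace(1e-4, 2e-2, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
sqrt_alphas_cumprod = alphas_cumprod.sqrt()
log_one_minus_alphas_cumprod = torch.log(1.0 - alphas_cumprod)

x_start = torch.randn(4, 3, 8, 8)
t = torch.randint(0, 1000, (4,))
mean = extract_into_tensor(sqrt_alphas_cumprod, t, x_start.shape) * x_start
variance = extract_into_tensor(1.0 - alphas_cumprod, t, x_start.shape)
log_variance = extract_into_tensor(log_one_minus_alphas_cumprod, t, x_start.shape)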
def delta_border(self, h, w):
"""
:param h: height
:param w: width
:return: normalized distance to image border,
        with min distance = 0 at border and max dist = 0.5 at image center
"""
lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2)
arr = self.meshgrid(h, w) / lower_right_corner
dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0]
dist_right_down = torch.min(1 - arr, dim=-1, keepdims=True)[0]
edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0]
return edge_dist
|
:param h: height
:param w: width
:return: normalized distance to image border,
    with min distance = 0 at border and max dist = 0.5 at image center
|
delta_border
|
python
|
ali-vilab/AnyDoor
|
ldm/models/diffusion/ddpm.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/ddpm.py
|
MIT
|
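As a quick illustration of the border-distance geometry, here is a standalone sketch of the same computation. `meshgrid_norm` is a hypothetical replacement for the class's `self.meshgrid` helper, assumed to return an (h, w, 2) grid of (row, col) indices.

import torch

def meshgrid_norm(h, w):
    # Hypothetical stand-in for self.meshgrid: (h, w, 2) grid of (row, col) indices.
    y = torch.arange(h).view(h, 1, 1).expand(h, w, 1)
    x = torch.arange(w).view(1, w, 1).expand(h, w, 1)
    return torch.cat([y, x], dim=-1).float()

def delta_border(h, w):
    # Normalized distance to the nearest image border: 0 at the border, 0.5 at the center.
    lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2)
    arr = meshgrid_norm(h, w) / lower_right_corner
    dist_left_up = torch.min(arr, dim=-1, keepdim=True)[0]
    dist_right_down = torch.min(1 - arr, dim=-1, keepdim=True)[0]
    return torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0]

print(delta_border(5, 5))  # edges -> 0.0, center -> 0.5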
def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code
"""
:param x: img of size (bs, c, h, w)
:return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1])
"""
bs, nc, h, w = x.shape
# number of crops in image
Ly = (h - kernel_size[0]) // stride[0] + 1
Lx = (w - kernel_size[1]) // stride[1] + 1
if uf == 1 and df == 1:
fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
unfold = torch.nn.Unfold(**fold_params)
fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params)
weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype)
normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap
weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx))
elif uf > 1 and df == 1:
fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
unfold = torch.nn.Unfold(**fold_params)
                fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[1] * uf),
dilation=1, padding=0,
stride=(stride[0] * uf, stride[1] * uf))
fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2)
weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype)
normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap
weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx))
elif df > 1 and uf == 1:
fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
unfold = torch.nn.Unfold(**fold_params)
                fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[1] // df),
dilation=1, padding=0,
stride=(stride[0] // df, stride[1] // df))
fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2)
weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype)
normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap
weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx))
else:
raise NotImplementedError
return fold, unfold, normalization, weighting
|
:param x: img of size (bs, c, h, w)
:return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1])
|
get_fold_unfold
|
python
|
ali-vilab/AnyDoor
|
ldm/models/diffusion/ddpm.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/ddpm.py
|
MIT
|
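To illustrate the uf == df == 1 path above: the image is split into overlapping crops with Unfold, then stitched back with Fold, and dividing by the fold of the weighting averages the overlapping regions instead of summing them. A uniform weighting stands in for the repo's `get_weighting` (an assumption here).

import torch

bs, c, h, w = 1, 3, 8, 8
kernel_size, stride = (4, 4), (2, 2)
Ly = (h - kernel_size[0]) // stride[0] + 1
Lx = (w - kernel_size[1]) // stride[1] + 1

x = torch.randn(bs, c, h, w)
fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
unfold = torch.nn.Unfold(**fold_params)
fold = torch.nn.Fold(output_size=(h, w), **fold_params)

# Uniform weighting: the folded weighting simply counts how often each pixel is covered.
weighting = torch.ones(1, kernel_size[0] * kernel_size[1], Ly * Lx)
normalization = fold(weighting).view(1, 1, h, w)

crops = unfold(x)                    # (bs, c*kh*kw, Ly*Lx)
recon = fold(crops) / normalization  # averaging the overlaps recovers x
assert torch.allclose(recon, x, atol=1e-6)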
def _prior_bpd(self, x_start):
"""
Get the prior KL term for the variational lower-bound, measured in
bits-per-dim.
This term can't be optimized, as it only depends on the encoder.
:param x_start: the [N x C x ...] tensor of inputs.
:return: a batch of [N] KL values (in bits), one per batch element.
"""
batch_size = x_start.shape[0]
t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device)
qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t)
kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0)
return mean_flat(kl_prior) / np.log(2.0)
|
Get the prior KL term for the variational lower-bound, measured in
bits-per-dim.
This term can't be optimized, as it only depends on the encoder.
:param x_start: the [N x C x ...] tensor of inputs.
:return: a batch of [N] KL values (in bits), one per batch element.
|
_prior_bpd
|
python
|
ali-vilab/AnyDoor
|
ldm/models/diffusion/ddpm.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/ddpm.py
|
MIT
|
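Minimal sketches of the two helpers used by _prior_bpd, following the standard guided-diffusion definitions; that they match this repo's versions exactly is an assumption.

import torch

def normal_kl(mean1, logvar1, mean2, logvar2):
    # KL( N(mean1, exp(logvar1)) || N(mean2, exp(logvar2)) ), element-wise, in nats.
    # Accepts Python floats for the prior's mean/logvar, as passed by _prior_bpd above.
    mean2 = torch.as_tensor(mean2, dtype=mean1.dtype, device=mean1.device)
    logvar2 = torch.as_tensor(logvar2, dtype=mean1.dtype, device=mean1.device)
    return 0.5 * (
        -1.0 + logvar2 - logvar1
        + torch.exp(logvar1 - logvar2)
        + ((mean1 - mean2) ** 2) * torch.exp(-logvar2)
    )

def mean_flat(tensor):
    # Mean over all non-batch dimensions -> one value per batch element.
    return tensor.mean(dim=list(range(1, len(tensor.shape))))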
def append_dims(x, target_dims):
"""Appends dimensions to the end of a tensor until it has target_dims dimensions.
From https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/utils.py"""
dims_to_append = target_dims - x.ndim
if dims_to_append < 0:
raise ValueError(f'input has {x.ndim} dims but target_dims is {target_dims}, which is less')
return x[(...,) + (None,) * dims_to_append]
|
Appends dimensions to the end of a tensor until it has target_dims dimensions.
From https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/utils.py
|
append_dims
|
python
|
ali-vilab/AnyDoor
|
ldm/models/diffusion/sampling_util.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/sampling_util.py
|
MIT
|
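A short usage note: the typical reason for `append_dims` is to broadcast a per-sample scalar (such as a noise level) against an image batch. With `append_dims` as defined above:

import torch

sigma = torch.tensor([0.5, 1.0])            # per-sample noise levels, shape (2,)
x = torch.randn(2, 3, 32, 32)

sigma_b = append_dims(sigma, x.ndim)        # shape (2, 1, 1, 1)
noised = x + sigma_b * torch.randn_like(x)  # broadcasts over C, H, W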
def __init__(
self,
schedule='discrete',
betas=None,
alphas_cumprod=None,
continuous_beta_0=0.1,
continuous_beta_1=20.,
):
"""Create a wrapper class for the forward SDE (VP type).
***
        Update: We support discrete-time diffusion models by implementing a piecewise linear interpolation for log_alpha_t.
        We recommend using schedule='discrete' for discrete-time diffusion models, especially for high-resolution images.
        ***
        The forward SDE ensures that the conditional distribution q_{t|0}(x_t | x_0) = N ( alpha_t * x_0, sigma_t^2 * I ).
We further define lambda_t = log(alpha_t) - log(sigma_t), which is the half-logSNR (described in the DPM-Solver paper).
Therefore, we implement the functions for computing alpha_t, sigma_t and lambda_t. For t in [0, T], we have:
log_alpha_t = self.marginal_log_mean_coeff(t)
sigma_t = self.marginal_std(t)
lambda_t = self.marginal_lambda(t)
Moreover, as lambda(t) is an invertible function, we also support its inverse function:
t = self.inverse_lambda(lambda_t)
===============================================================
We support both discrete-time DPMs (trained on n = 0, 1, ..., N-1) and continuous-time DPMs (trained on t in [t_0, T]).
1. For discrete-time DPMs:
For discrete-time DPMs trained on n = 0, 1, ..., N-1, we convert the discrete steps to continuous time steps by:
t_i = (i + 1) / N
e.g. for N = 1000, we have t_0 = 1e-3 and T = t_{N-1} = 1.
We solve the corresponding diffusion ODE from time T = 1 to time t_0 = 1e-3.
Args:
betas: A `torch.Tensor`. The beta array for the discrete-time DPM. (See the original DDPM paper for details)
alphas_cumprod: A `torch.Tensor`. The cumprod alphas for the discrete-time DPM. (See the original DDPM paper for details)
                Note that we always have alphas_cumprod = cumprod(1 - betas). Therefore, we only need to set one of `betas` and `alphas_cumprod`.
            **Important**: Please pay special attention to the arg `alphas_cumprod`:
                The `alphas_cumprod` is the \hat{alpha_n} array in the notation of DDPM. Specifically, DDPMs assume that
q_{t_n | 0}(x_{t_n} | x_0) = N ( \sqrt{\hat{alpha_n}} * x_0, (1 - \hat{alpha_n}) * I ).
Therefore, the notation \hat{alpha_n} is different from the notation alpha_t in DPM-Solver. In fact, we have
alpha_{t_n} = \sqrt{\hat{alpha_n}},
and
log(alpha_{t_n}) = 0.5 * log(\hat{alpha_n}).
2. For continuous-time DPMs:
We support two types of VPSDEs: linear (DDPM) and cosine (improved-DDPM). The hyperparameters for the noise
schedule are the default settings in DDPM and improved-DDPM:
Args:
beta_min: A `float` number. The smallest beta for the linear schedule.
beta_max: A `float` number. The largest beta for the linear schedule.
cosine_s: A `float` number. The hyperparameter in the cosine schedule.
cosine_beta_max: A `float` number. The hyperparameter in the cosine schedule.
T: A `float` number. The ending time of the forward process.
===============================================================
Args:
schedule: A `str`. The noise schedule of the forward SDE. 'discrete' for discrete-time DPMs,
'linear' or 'cosine' for continuous-time DPMs.
Returns:
A wrapper object of the forward SDE (VP type).
===============================================================
Example:
# For discrete-time DPMs, given betas (the beta array for n = 0, 1, ..., N - 1):
>>> ns = NoiseScheduleVP('discrete', betas=betas)
# For discrete-time DPMs, given alphas_cumprod (the \hat{alpha_n} array for n = 0, 1, ..., N - 1):
>>> ns = NoiseScheduleVP('discrete', alphas_cumprod=alphas_cumprod)
# For continuous-time DPMs (VPSDE), linear schedule:
>>> ns = NoiseScheduleVP('linear', continuous_beta_0=0.1, continuous_beta_1=20.)
"""
if schedule not in ['discrete', 'linear', 'cosine']:
raise ValueError(
"Unsupported noise schedule {}. The schedule needs to be 'discrete' or 'linear' or 'cosine'".format(
schedule))
self.schedule = schedule
if schedule == 'discrete':
if betas is not None:
log_alphas = 0.5 * torch.log(1 - betas).cumsum(dim=0)
else:
assert alphas_cumprod is not None
log_alphas = 0.5 * torch.log(alphas_cumprod)
self.total_N = len(log_alphas)
self.T = 1.
self.t_array = torch.linspace(0., 1., self.total_N + 1)[1:].reshape((1, -1))
self.log_alpha_array = log_alphas.reshape((1, -1,))
else:
self.total_N = 1000
self.beta_0 = continuous_beta_0
self.beta_1 = continuous_beta_1
self.cosine_s = 0.008
self.cosine_beta_max = 999.
self.cosine_t_max = math.atan(self.cosine_beta_max * (1. + self.cosine_s) / math.pi) * 2. * (
1. + self.cosine_s) / math.pi - self.cosine_s
self.cosine_log_alpha_0 = math.log(math.cos(self.cosine_s / (1. + self.cosine_s) * math.pi / 2.))
self.schedule = schedule
if schedule == 'cosine':
# For the cosine schedule, T = 1 will have numerical issues. So we manually set the ending time T.
                # Note that T = 0.9946 may not be the optimal setting. However, we find it works well.
self.T = 0.9946
else:
self.T = 1.
|
Create a wrapper class for the forward SDE (VP type).
***
    Update: We support discrete-time diffusion models by implementing a piecewise linear interpolation for log_alpha_t.
    We recommend using schedule='discrete' for discrete-time diffusion models, especially for high-resolution images.
    ***
    The forward SDE ensures that the conditional distribution q_{t|0}(x_t | x_0) = N ( alpha_t * x_0, sigma_t^2 * I ).
We further define lambda_t = log(alpha_t) - log(sigma_t), which is the half-logSNR (described in the DPM-Solver paper).
Therefore, we implement the functions for computing alpha_t, sigma_t and lambda_t. For t in [0, T], we have:
log_alpha_t = self.marginal_log_mean_coeff(t)
sigma_t = self.marginal_std(t)
lambda_t = self.marginal_lambda(t)
Moreover, as lambda(t) is an invertible function, we also support its inverse function:
t = self.inverse_lambda(lambda_t)
===============================================================
We support both discrete-time DPMs (trained on n = 0, 1, ..., N-1) and continuous-time DPMs (trained on t in [t_0, T]).
1. For discrete-time DPMs:
For discrete-time DPMs trained on n = 0, 1, ..., N-1, we convert the discrete steps to continuous time steps by:
t_i = (i + 1) / N
e.g. for N = 1000, we have t_0 = 1e-3 and T = t_{N-1} = 1.
We solve the corresponding diffusion ODE from time T = 1 to time t_0 = 1e-3.
Args:
betas: A `torch.Tensor`. The beta array for the discrete-time DPM. (See the original DDPM paper for details)
alphas_cumprod: A `torch.Tensor`. The cumprod alphas for the discrete-time DPM. (See the original DDPM paper for details)
            Note that we always have alphas_cumprod = cumprod(1 - betas). Therefore, we only need to set one of `betas` and `alphas_cumprod`.
        **Important**: Please pay special attention to the arg `alphas_cumprod`:
            The `alphas_cumprod` is the \hat{alpha_n} array in the notation of DDPM. Specifically, DDPMs assume that
q_{t_n | 0}(x_{t_n} | x_0) = N ( \sqrt{\hat{alpha_n}} * x_0, (1 - \hat{alpha_n}) * I ).
Therefore, the notation \hat{alpha_n} is different from the notation alpha_t in DPM-Solver. In fact, we have
alpha_{t_n} = \sqrt{\hat{alpha_n}},
and
log(alpha_{t_n}) = 0.5 * log(\hat{alpha_n}).
2. For continuous-time DPMs:
We support two types of VPSDEs: linear (DDPM) and cosine (improved-DDPM). The hyperparameters for the noise
schedule are the default settings in DDPM and improved-DDPM:
Args:
beta_min: A `float` number. The smallest beta for the linear schedule.
beta_max: A `float` number. The largest beta for the linear schedule.
cosine_s: A `float` number. The hyperparameter in the cosine schedule.
cosine_beta_max: A `float` number. The hyperparameter in the cosine schedule.
T: A `float` number. The ending time of the forward process.
===============================================================
Args:
schedule: A `str`. The noise schedule of the forward SDE. 'discrete' for discrete-time DPMs,
'linear' or 'cosine' for continuous-time DPMs.
Returns:
A wrapper object of the forward SDE (VP type).
===============================================================
Example:
# For discrete-time DPMs, given betas (the beta array for n = 0, 1, ..., N - 1):
>>> ns = NoiseScheduleVP('discrete', betas=betas)
# For discrete-time DPMs, given alphas_cumprod (the \hat{alpha_n} array for n = 0, 1, ..., N - 1):
>>> ns = NoiseScheduleVP('discrete', alphas_cumprod=alphas_cumprod)
# For continuous-time DPMs (VPSDE), linear schedule:
>>> ns = NoiseScheduleVP('linear', continuous_beta_0=0.1, continuous_beta_1=20.)
|
__init__
|
python
|
ali-vilab/AnyDoor
|
ldm/models/diffusion/dpm_solver/dpm_solver.py
|
https://github.com/ali-vilab/AnyDoor/blob/master/ldm/models/diffusion/dpm_solver/dpm_solver.py
|
MIT
|
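A brief usage sketch of the discrete branch above, assuming `NoiseScheduleVP` is in scope. It only exercises the attributes set by this constructor (total_N, T, t_array, log_alpha_array); the marginal_* methods referenced in the docstring are defined elsewhere in the file.

import torch

# Construct the discrete-time wrapper from a standard DDPM beta schedule.
# N = 1000 steps map to continuous times t_i = (i + 1) / N, i.e. t in [1e-3, 1].
betas = torch.linspace(1e-4, 2e-2, 1000)
ns = NoiseScheduleVP('discrete', betas=betas)

# Equivalently, pass the DDPM \hat{alpha}_n array; alphas_cumprod = cumprod(1 - betas),
# and log(alpha_{t_n}) = 0.5 * log(\hat{alpha}_n), matching the constructor above.
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
ns2 = NoiseScheduleVP('discrete', alphas_cumprod=alphas_cumprod)

assert torch.allclose(ns.log_alpha_array, ns2.log_alpha_array, atol=1e-3)
print(ns.total_N, ns.T, ns.t_array[0, 0].item())  # 1000, 1.0, 0.001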