Seun/nist gear insertion task example #567
NIST peg-insert gear assembly environment: custom OSC action term, keypoint squashing rewards, IK gripper reset, and domain randomization. Includes RL-Games PPO config, train/play scripts, and policy runner wrapper.
Isaac Lab 3.0 changed from wxyz to xyzw quaternion ordering. This caused the robot to spawn upside down, leading to IK solver failure, NaN observations, and training crashes. Key fixes:
- Robot init rotation: (1,0,0,0) -> (0,0,0,1) in robot_configs.py
- Grasp rotation offset: wxyz -> xyzw in environment config
- Quaternion canonicalization: check w at index 3 (not 0) everywhere
- Replace torch_utils (wxyz) with math_utils (xyzw) in OSC action
- Wrap all warp arrays with wp.to_torch() for PyTorch compatibility
- Update deprecated Isaac Lab API calls to _index variants
- Increase gpu_collision_stack_size to 4 GB for contact-heavy scenes
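The "check w at index 3" fix above can be illustrated with a small dependency-free sketch (the helper name is hypothetical; the real code operates on batched torch tensors):

```python
def canonicalize_xyzw(q):
    """Return the sign-canonical form of an xyzw quaternion.

    In xyzw ordering the scalar part w is q[3], not q[0]. Since q and -q
    encode the same rotation, canonicalization flips all components so
    that w >= 0 -- checking index 0 (the wxyz habit) would flip on x instead.
    """
    x, y, z, w = q
    s = -1.0 if w < 0.0 else 1.0
    return (s * x, s * y, s * z, s * w)
```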
- Consolidate 3 observation files into single gear_insertion_observations.py
- Replace 4 custom obs functions with Isaac Lab built-ins (root_pos_w, root_quat_w)
- Slim NistGearInsertionTask constructor (40 → 17 params) via GraspConfig dataclass
- Hardcode reward weights in configclass instead of passing through constructor
- Delete bespoke play_rl_games.py; use generic policy_runner.py for evaluation
- Genericise RlGamesActionPolicy (remove NIST-specific defaults)
- Move RL-Games YAML config to isaaclab_arena_examples/policy/
- Clean up mdp/__init__.py re-exports
- Add merge readiness report and NIST vs Lift comparison doc
- Keep success term registered (returns all-False during training) so SuccessRateMetric can query it, matching Lift task pattern
- Loosen success_z_fraction from 0.05 to 0.15 (3mm depth threshold)
- Add new NIST asset definitions to object_library
Add step-by-step documentation for the NIST gear insertion RL workflow (environment setup, policy training, evaluation) mirroring the existing Franka lift task pages. Include task overview GIFs, register the new workflow in the RL workflows index, and clamp NaN/inf values in force-torque observations to prevent training instabilities.
Greptile Summary
This PR adds a complete NIST gear insertion RL workflow: a Franka Panda robot inserts a medium gear onto a peg using operational-space torque control and RL Games PPO. It includes task definition, OSC action/observation/reward terms, a Franka mimic embodiment, an RL Games policy wrapper, and documentation.
Confidence Score: 5/5
New findings in this revision are non-blocking; several issues flagged in earlier review rounds have been addressed (angular velocity now uses unit quaternions, gripper close width is zero, yaw wrapping now clamps correctly). The changes introduced in this revision are structurally sound: the OSC action, observation, and reward modules follow established Factory/Forge patterns, the geometry utilities correctly subtract env origins, and the RL Games wrapper correctly avoids phantom episode resets. New findings from this review pass are style or advisory concerns only.
Important Files Changed: osc_action.py (bare string to find_bodies, previously flagged), osc_rewards.py (prediction loss weight not reset, previously flagged), task.py (mixed world/env-frame coordinates in task_obs, one-sided yaw randomization)
Sequence Diagram

```mermaid
sequenceDiagram
    participant EM as EventManager
    participant PGG as place_gear_in_gripper
    participant AM as ActionManager
    participant OSC as NistGearInsertionOscAction
    participant OM as ObservationManager
    participant OBS as NistGearInsertionPolicyObservations
    participant RM as RewardManager
    Note over EM,RM: Episode Reset
    EM->>PGG: reset(env_ids)
    PGG->>PGG: _run_grasp_ik() [10 DLS IK iters]
    PGG->>PGG: _write_gripper_width(hand_grasp_width)
    PGG->>PGG: _set_gripper_width_target(hand_close_width=0)
    AM->>OSC: reset(env_ids)
    OSC->>OSC: _reset_action_filter() [randomize EMA]
    OSC->>OSC: _reset_command_thresholds() [randomize pos/rot limits + peg noise]
    OSC->>OSC: _reset_contact_state() [randomize force threshold]
    OSC->>OSC: _reset_initial_smoothed_actions() [seed from current EE pose]
    OM->>OBS: reset(env_ids)
    OBS->>OBS: store reset-time fingertip quat as _prev_noisy_quat
    Note over EM,RM: Step
    AM->>OSC: process_actions(policy_actions)
    OSC->>OSC: _update_smoothed_force() [EMA wrist force]
    OSC->>OSC: _update_smoothed_actions() [EMA filter + clamp]
    OSC->>OSC: _compute_bounded_target_position_from_policy_action()
    OSC->>OSC: _compute_bounded_target_orientation_from_policy_yaw()
    OSC->>OSC: osc.set_command()
    OM->>OBS: __call__()
    OBS->>OBS: _read_fingertip_pose() [env-frame pos + quat]
    OBS->>OBS: _sample_noisy_pose() [pos + rot noise]
    OBS->>OBS: _make_reported_quat() [zero z,w for symmetry]
    OBS->>OBS: _compute_velocities() [from full unit quats]
    OBS->>OBS: _compute_force_action_observations()
    OBS->>OBS: _pack_observation() [24-D vector]
    RM->>RM: gear_peg_keypoint_squashing [kp_baseline/coarse/fine]
    RM->>RM: gear_insertion_geometry_bonus [engagement + success]
    RM->>RM: osc_action_magnitude_penalty
    RM->>RM: success_prediction_error [gated aux loss]
```
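The gear_peg_keypoint_squashing term above follows the Factory-style keypoint squashing pattern. A minimal sketch of one such term (the coefficients a and b below are illustrative, not the PR's actual values):

```python
import math

def squashed_keypoint_reward(dist: float, a: float = 50.0, b: float = 2.0) -> float:
    """Squashing function: peaks at dist=0 with value 1/(2+b), decays smoothly.

    Three copies with different (a, b) give the baseline/coarse/fine scales:
    a small `a` rewards coarse approach from far away, while a large `a`
    sharpens the gradient only near the goal.
    """
    return 1.0 / (math.exp(a * dist) + b + math.exp(-a * dist))
```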
```python
noisy_quat = math_utils.quat_mul(
    ft_quat,
    math_utils.quat_from_angle_axis(rot_noise_angle, rot_noise_axis),
)
noisy_quat[:, [2, 3]] = 0.0
noisy_quat = noisy_quat * self._flip_quats.unsqueeze(-1)
# ...
arm_osc_action = env.action_manager._terms["arm_action"]
board_pos = wp.to_torch(board.data.root_pos_w) - origins
board_quat = wp.to_torch(board.data.root_quat_w)
peg_offset_exp = self._peg_offset.unsqueeze(0).expand(n, 3)
peg_pos = board_pos + math_utils.quat_apply(board_quat, peg_offset_exp)
noisy_fixed_pos = peg_pos + arm_osc_action.fixed_pos_noise
# ...
fingertip_pos_rel = noisy_pos - noisy_fixed_pos
# ...
safe_dt = max(dt, 1e-6)
ee_linvel = (noisy_pos - self._prev_noisy_pos) / safe_dt
self._prev_noisy_pos[:] = noisy_pos
# ...
rot_diff = math_utils.quat_mul(noisy_quat, math_utils.quat_conjugate(self._prev_noisy_quat))
rot_diff = rot_diff * torch.sign(rot_diff[:, 3]).unsqueeze(-1)
ee_angvel = axis_angle_from_quat(rot_diff) / safe_dt
ee_angvel[:, 0:2] = 0.0
self._prev_noisy_quat[:] = noisy_quat
```
Non-unit quaternion corrupts ee_angvel computation
noisy_quat[:, [2, 3]] = 0.0 zeroes the z-imaginary and w-real components of the xyzw quaternion, producing [x, y, 0, 0] whose norm is sqrt(x²+y²) ≠ 1 (and is exactly 0 at the identity rotation where x=y=0). This non-unit quaternion is then fed directly into quat_mul / axis_angle_from_quat to estimate ee_angvel, which requires unit quaternions. The angular velocity signal included in the 24-D policy observation will be numerically incorrect — and zero at the identity pose — which can silently degrade training.
The ee_angvel should be computed from the normalized quaternions before the zeroing step. The zeroed representation is only needed for the neural-network input. For example:
```python
# 1. Compute angular velocity from unit quaternions first
rot_diff_full = math_utils.quat_mul(ft_quat, math_utils.quat_conjugate(self._prev_noisy_quat_full))
rot_diff_full = rot_diff_full * torch.sign(rot_diff_full[:, 3]).unsqueeze(-1)
ee_angvel = axis_angle_from_quat(rot_diff_full) / safe_dt
# 2. Then zero z/w for the policy observation representation
noisy_quat_for_obs = noisy_quat.clone()
noisy_quat_for_obs[:, [2, 3]] = 0.0
noisy_quat_for_obs = noisy_quat_for_obs * self._flip_quats.unsqueeze(-1)
self._prev_noisy_quat[:] = ft_quat  # track full unit quaternion
```

```python
flip[torch.rand(n, device=dev) > 0.5] = -1.0
self._flip_quats[env_ids] = flip
# ...
self._ensure_body_indices()
robot: Articulation = self._env.scene[self._robot_name]
origins = self._env.scene.env_origins
self._prev_noisy_pos[env_ids] = (
    wp.to_torch(robot.data.body_pos_w)[env_ids, self._fingertip_idx] - origins[env_ids]
)
self._prev_noisy_quat[env_ids] = wp.to_torch(robot.data.body_quat_w)[env_ids, self._fingertip_idx]
```
First reset mismatches unit vs. zeroed quaternion
In reset(), self._prev_noisy_quat[env_ids] is stored as the raw body quaternion (full unit quaternion). But in __call__, noisy_quat is zeroed at indices [2, 3] and then saved to self._prev_noisy_quat. So the very first step after every episode reset computes rot_diff from a full unit quaternion (prev) vs. a zeroed non-unit quaternion (current), causing a large spurious angular velocity spike at step 0 of each episode. The reset path should store the same representation that __call__ will write, or the angular velocity should be zeroed at the reset step.
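One way to keep the two paths consistent is to route both reset() and __call__ through the same transform before storing. A hypothetical, dependency-free helper (the real code operates on torch tensors, and the alternative fix is to track full unit quaternions for the velocity path instead):

```python
def to_reported_quat(q_xyzw, flip):
    """Symmetry-reduced observation form of an xyzw quaternion: zero the z and
    w components, then apply the per-episode sign flip. Whatever reset() stores
    as the "previous" quaternion must go through this same transform that
    __call__ applies, otherwise the first step's finite difference is garbage."""
    x, y, _z, _w = q_xyzw
    return (flip * x, flip * y, 0.0, 0.0)
```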
```python
    ``gear_insertion_success_bonus`` / ``gear_mesh_insertion_success``), not the gear
    rigid-body root, consistent with common assembly peg-insert benchmarks.
    """

    def __init__(self, cfg: RewardTermCfg, env: ManagerBasedRLEnv):
        super().__init__(cfg, env)
        self._pred_scale = 0.0
        self._delay_until_ratio: float = cfg.params.get("delay_until_ratio", 0.25)
        self._gear_cfg: SceneEntityCfg = cfg.params["gear_cfg"]
        self._board_cfg: SceneEntityCfg = cfg.params["board_cfg"]
        hgo = cfg.params.get("held_gear_base_offset", [2.025e-2, 0.0, 0.0])
```
_pred_scale is never reset — curriculum latches permanently
self._pred_scale starts at 0.0 and is set to 1.0 once true_success.float().mean() >= self._delay_until_ratio. Because it is an instance-level float with no reset path, once activated it stays at 1.0 for the remainder of training — even if the policy later regresses below the threshold. Whether this irreversibility is intentional (one-way curriculum gate) should be confirmed; if performance can degrade after the switch, the penalty will still apply and may prevent recovery.
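A toy illustration of the latch versus a reversible gate (hypothetical class, not the PR's RewardTerm):

```python
class CurriculumGate:
    """Contrast a one-way latch with a reversible threshold gate."""

    def __init__(self, threshold: float = 0.25):
        self.threshold = threshold
        self._latched = 0.0

    def latched_scale(self, success_ratio: float) -> float:
        # One-way: once activated, stays at 1.0 even if performance regresses.
        if success_ratio >= self.threshold:
            self._latched = 1.0
        return self._latched

    def reversible_scale(self, success_ratio: float) -> float:
        # Stateless alternative: follows the current success ratio, so the
        # penalty switches back off if the policy drops below the threshold.
        return 1.0 if success_ratio >= self.threshold else 0.0
```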
| "asset_cfg": SceneEntityCfg("robot", joint_names=[arm_joints]), | ||
| "stiffness_distribution_params": (0.75, 1.5), | ||
| "damping_distribution_params": (0.3, 3.0), | ||
| "operation": "scale", | ||
| "distribution": "log_uniform", | ||
| }, | ||
| ) | ||
| cfg.robot_joint_friction = EventTermCfg( | ||
| func=mdp_isaac_lab.randomize_joint_parameters, | ||
| mode="reset", | ||
| params={ | ||
| "asset_cfg": SceneEntityCfg("robot", joint_names=[arm_joints]), |
joint_names=[arm_joints] — regex string wrapped in a list
arm_joints is a regex string ("panda_joint.*" or "panda_joint[1-7]"), but it is passed as joint_names=[arm_joints] — a list containing one element. Whether SceneEntityCfg interprets each list element as a regex pattern or as a literal name depends on the Isaac Lab version. If treated as a literal name, it will match zero joints and silently produce no-ops for actuator-gain and joint-friction randomization. The same pattern appears at line 264. Verify the expected API; if a single regex is correct, pass it directly:
"asset_cfg": SceneEntityCfg("robot", joint_names=arm_joints),| held_gear_base_offset: list[float] | None = None, | ||
| gear_peg_height: float = 0.02, | ||
| success_z_fraction: float = 0.80, | ||
| xy_threshold: float = 0.0025, | ||
| rl_training: bool = False, | ||
| ) -> torch.Tensor: | ||
| """Terminate when the gear is inserted onto the peg to the required depth. | ||
|
|
||
| Checks that the held gear's base (root + held_gear_base_offset in gear frame) | ||
| is centered on the peg (XY) and lowered past a fraction of the peg height (Z). | ||
| Peg position is fixed_asset pose + gear_base_offset in the fixed asset's local frame. | ||
|
|
||
| When ``rl_training`` is True, always returns False (no early termination) | ||
| but the term stays registered so that ``SuccessRateMetric`` can query it. | ||
| """ | ||
| if rl_training: | ||
| return torch.zeros(env.num_envs, dtype=torch.bool, device=env.device) | ||
| held_object: RigidObject = env.scene[held_object_cfg.name] | ||
| fixed_object: RigidObject = env.scene[fixed_object_cfg.name] | ||
|
|
||
| held_root = wp.to_torch(held_object.data.root_pos_w) - env.scene.env_origins | ||
| held_quat = wp.to_torch(held_object.data.root_quat_w) | ||
| h_offset = held_gear_base_offset if held_gear_base_offset is not None else gear_base_offset | ||
| held_off = torch.tensor(h_offset, device=env.device, dtype=torch.float32).unsqueeze(0).expand(env.num_envs, 3) | ||
| held_base_pos = held_root + math_utils.quat_apply(held_quat, held_off) |
gear_orientation_exceeded reads an attribute that is never set
env._initial_gear_ee_relative_quat is accessed inside the function body, but no event, task, or reset logic in this PR ever sets this attribute on the env. The hasattr guard makes the function silently return all-False (never triggers), so it is effectively dead code. If this termination is intended for future use, a TODO comment would clarify that; if it should be active now, the attribute needs to be populated — e.g., in the place_gear_in_gripper event.
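If the termination should be active, the reference orientation could be captured when the gear is placed in the gripper. A sketch with a pure-Python xyzw quaternion product (function names are hypothetical; the real code would use math_utils on torch tensors):

```python
def quat_conjugate_xyzw(q):
    x, y, z, w = q
    return (-x, -y, -z, w)

def quat_mul_xyzw(q1, q2):
    """Hamilton product for xyzw-ordered quaternions."""
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return (
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    )

def capture_initial_relative_quat(env, ee_quat, gear_quat):
    """Store the gear orientation relative to the end effector at grasp time,
    so gear_orientation_exceeded has a reference to compare against later."""
    env._initial_gear_ee_relative_quat = quat_mul_xyzw(quat_conjugate_xyzw(ee_quat), gear_quat)
```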
```python
name = "gears_and_base"
tags = ["object"]
usd_path = str(_REPO_ROOT / "assets" / "gearbase_and_gears_gearbase_root.usd")
```
Asset USD paths reference local files that don't exist in the repo
All new NIST asset classes point to _REPO_ROOT / "assets" / "*.usd" paths that are not committed. The PR description notes these will be updated when assets are uploaded to Nucleus, but loading any of these asset classes will raise a file-not-found error at sim startup. Consider adding a clear NOTE comment on each class to make the stub status explicit.
kellyguo11 left a comment
PR #567 Review: NIST Gear Insertion Task with RL-Games Training Pipeline
Overall Assessment
This is a substantial and well-structured PR that adds a complete RL workflow for contact-rich gear insertion on the NIST assembly board. The architecture follows existing Arena patterns (lift task), the documentation is thorough, and the commit history shows thoughtful iteration (quaternion migration, refactoring, observation hardening). Good work overall.
That said, there are several issues worth addressing before merge — ranging from correctness concerns to code quality and maintainability.
🔴 Issues (Should Fix)
1. Duplicate copyright headers in train_rl_games.py
Lines 1-5 and 7-8 both have copyright/SPDX headers. Remove the duplicate.
2. gpu_collision_stack_size=2**32 - 1 in env_callbacks.py is 4 GB — this affects ALL assembly environments, not just this task
This is set in the shared assembly_env_cfg_callback. Other tasks that import this callback will now request 4 GB collision stack. Consider:
- Making it configurable per-environment, or
- At minimum, adding a comment explaining why this value is needed (contact-heavy gear mesh scenes), so future maintainers know whether they can tune it down.
3. Mutable default arguments in function signatures
Multiple functions use mutable list defaults (peg_offset: list[float] = [0.0, 0.0, 0.0]). This is a well-known Python anti-pattern. Examples:
- gear_insertion_rewards.py: gear_insertion_engagement_bonus, gear_insertion_success_bonus, _check_gear_position
- gear_insertion_observations.py: peg_pos_in_env_frame, peg_delta_from_held_gear_base, held_gear_base_pos_in_env_frame
- terminations.py: gear_mesh_insertion_success, gear_dropped_from_gripper
While these are typically called by the manager framework (which passes explicit args), it's still a latent bug. Use tuple or None with a default factory pattern.
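A quick demonstration of why this is latent rather than theoretical (toy functions, not the PR's):

```python
def append_bad(item, acc=[]):      # default list is created ONCE and shared across calls
    acc.append(item)
    return acc

def append_good(item, acc=None):   # fresh list per call unless one is supplied
    acc = [] if acc is None else acc
    acc.append(item)
    return acc
```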
4. NistAssembledBoard vs NistBoardAssembled in object_library.py
Two very similarly named asset classes with different name fields ("nist_assembled_board" vs "nist_board_assembled"). The environment only uses "nist_board_assembled". Is NistAssembledBoard actually needed? If not, remove it to avoid confusion. If both are needed, add docstrings explaining the distinction.
5. Missing __init__.py for observations and rewards subdirectories
isaaclab_arena/tasks/observations/gear_insertion_observations.py and isaaclab_arena/tasks/rewards/gear_insertion_rewards.py are new files in subdirectories. Are there __init__.py files already? The diff doesn't show them being created, but imports in nist_gear_insertion_task.py use direct module paths, so this may be fine if the parent __init__.py already exists. Worth verifying.
6. NistGearInsertionPolicyObservations accesses private action_manager._terms
In gear_insertion_observations.py line ~230: arm_osc_action = env.action_manager._terms["arm_action"]. Accessing _terms (private dict) is fragile. If there's a public API for this (like env.action_manager.get_term("arm_action")), use it. Same pattern appears in gear_insertion_rewards.py (osc_action_magnitude_penalty, osc_action_delta_penalty, wrist_contact_force_penalty, success_prediction_error).
7. force_torque_at_body uses robot.root_view.get_link_incoming_joint_force() — undocumented API
get_link_incoming_joint_force() is a PhysX-level call via root_view. This couples the code to specific Isaac Sim internals. Same pattern in NistGearInsertionPolicyObservations.__call__. Add a comment explaining this dependency.
🟡 Suggestions (Nice to Have)
8. Line length violations
Several lines exceed reasonable width (>120 chars), making diffs harder to review:
- gear_insertion_observations.py line 60 (held_gear_base_pos_in_env_frame)
- terminations.py line ~298 (tensor creation in gear_mesh_insertion_success)
- gear_insertion_rewards.py line ~143
9. Duplicated geometry checks across rewards, terminations, and observations
_check_gear_position (rewards), gear_mesh_insertion_success (terminations), and the success prediction logic all compute the same peg-vs-held-gear-base geometry independently. Consider extracting a shared utility function to reduce drift risk.
10. grasp_rot_offset default [1.0, 0.0, 0.0, 0.0] in GraspConfig — this is xyzw format with w=0
[1.0, 0.0, 0.0, 0.0] — if this is xyzw convention, then w=0 which is not a valid quaternion. The environment overrides it with [1.0, 0.0, 0.0, 0.0] too. Given the Isaac Lab 3.0 migration to xyzw, this looks like it should be [0.0, 0.0, 0.0, 1.0] (identity in xyzw). The commit message says wxyz→xyzw migration was done, but the default in the dataclass may have been missed. Verify this is intentional — the environment always overrides it, but the default is misleading/incorrect.
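The distinction matters because [1, 0, 0, 0] is a unit quaternion in both conventions, but in xyzw it encodes a 180° rotation about x rather than the identity. A small validity check (hypothetical helper) would make the intent explicit in GraspConfig:

```python
def xyzw_is_identity(q, tol=1e-6):
    """True only for the xyzw identity rotation (0, 0, 0, +/-1).

    Catches the wxyz-era default [1, 0, 0, 0], which in xyzw ordering is a
    half-turn about x -- a unit quaternion, but not the identity.
    """
    x, y, z, w = q
    return abs(x) <= tol and abs(y) <= tol and abs(z) <= tol and abs(abs(w) - 1.0) <= tol
```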
11. NistGearSmall and NistGearLarge are registered but never used
These asset classes exist in object_library.py but nothing in this PR references them. If they're for future use, add a brief comment. Otherwise, defer adding them until they're needed.
12. concate_obs (typo?) in RL Games YAML and wrapper
concate_obs_groups appears in the YAML config — is this intentionally concate (not concatenate)? This follows the existing RL Games wrapper naming, but worth noting.
13. RL Games YAML: player.deterministic: False
The default player config has deterministic: False. The RlGamesActionPolicy class defaults to deterministic: True. This inconsistency could cause confusion when someone evaluates using the raw RL Games player vs. the Arena policy runner.
14. Consider adding type hints to _KeypointDistanceComputer.compute
The class works well but return types and param types aren't annotated, which would help maintainability.
15. place_gear_in_gripper iterates gripper_joint_setter_func per-row
The loop for row_idx in range(n): self.gripper_joint_setter_func(...) is called twice (open then close). This is O(n) Python loops over environments. Since the setter just does joint_pos[row_indices, jid] = width / 2.0, this could be vectorized to a single tensor operation.
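The per-row loop can collapse into one advanced-indexing assignment. A sketch assuming joint_pos is a (num_envs, num_joints) tensor and both finger joints receive width / 2 (function and argument names are hypothetical):

```python
import torch

def set_gripper_width_vectorized(joint_pos: torch.Tensor,
                                 env_ids: torch.Tensor,
                                 finger_joint_ids: torch.Tensor,
                                 width: float) -> torch.Tensor:
    """Write width/2 to every (env, finger joint) pair in a single indexed
    assignment: env_ids broadcasts against finger_joint_ids, replacing the
    O(n) Python loop over environment rows."""
    joint_pos[env_ids.unsqueeze(-1), finger_joint_ids] = width / 2.0
    return joint_pos
```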
✅ What's Good
- Clean separation of task/environment/policy following Arena conventions
- Documentation mirrors the existing Franka lift workflow — consistent UX
- Observation hardening (nan_to_num, force clamping) is a smart addition for contact-rich tasks
- Commit history is well-structured: each commit has a clear purpose
- Domain randomization is comprehensive (friction, mass, actuator gains, joint friction, yaw variation)
- The GraspConfig dataclass is a good refactor — keeps the task constructor focused
Summary
The PR is in good shape architecturally. The main items to address are:
- The gpu_collision_stack_size change scope (affects all assembly envs)
- The quaternion default in GraspConfig (verify xyzw correctness)
- Mutable default args (easy fix, use tuples)
- Clean up the duplicate asset class (NistAssembledBoard vs NistBoardAssembled)
- Duplicate copyright header in training script
The rest are quality-of-life improvements that can be addressed in follow-ups.
kellyguo11 left a comment
PR #567 Code Review: NIST Gear Insertion Task
Summary
Well-structured PR adding a complete RL workflow for contact-rich gear insertion. The architecture follows Arena patterns cleanly. I see the prior review from @kellyguo11 already covered several important items — I'll focus on additional observations and a few overlapping concerns worth reinforcing.
🔴 Issues
1. _REPO_ROOT resolution in object_library.py is fragile
_REPO_ROOT = Path(__file__).resolve().parent.parent.parent computes a repo root from file location. If the package is installed as editable vs. site-packages, or the directory structure changes, this silently breaks. Consider using importlib.resources or a package-level constant.
2. wp.to_torch() wrapping throughout — potential memory/lifecycle issue
Every observation/reward/termination call does wp.to_torch(asset.data.root_pos_w). If Warp returns views into GPU memory, the torch tensors share that buffer. Any intermediate sim step could invalidate them. The code generally uses these in-place within a single step, which is fine, but the pattern of wp.to_torch(board.data.root_pos_w)[:n] followed by arithmetic should be verified not to alias buffers that get overwritten.
3. success_prediction_error reward uses mutable _pred_scale — non-deterministic across resets
self._pred_scale starts at 0.0 and flips to 1.0 once the mean success rate crosses delay_until_ratio. But it never resets back to 0.0. If training performance dips after initially hitting the threshold, the prediction penalty stays active. This is a design choice but should be documented — it creates a one-way ratchet effect on the reward landscape.
4. place_gear_in_gripper IK loop doesn't write root poses — sim state may be stale
The IK loop in events.py writes joint positions to sim (write_joint_position_to_sim_index) but never calls env.scene.write_data_to_sim() or steps the simulation between iterations. The body positions read via robot.data.body_pos_w may not reflect the written joint positions until the next sim step. If this IK converges in practice it's fine, but the convergence relies on the FK being updated between writes, which may only work because of how the articulation view caches data.
5. NistGearInsertionOscAction.process_actions — roll/pitch clipping logic has redundant branches
The code computes desired_roll and desired_pitch from target_quat (which already locks roll/pitch via flip_quat), then clips them against the current values. Since the target always has roll=π and pitch=0, the delta computation and clipping could be simplified. The current approach works but is harder to reason about correctness.
🟡 Suggestions
6. Observation function accesses arm_osc_action._smoothed_actions and .force_smooth — tight coupling
The policy observation function (NistGearInsertionPolicyObservations.__call__) both reads from and writes to the action term (arm_osc_action.force_smooth[:] = ...). An observation function mutating action-term state is a surprising side effect. Consider moving the force smoothing update into the action term's process_actions or a dedicated pre-observation hook.
7. contact_thresholds initialized to 7.5 but range is (5.0, 10.0) — reset-only randomization
In NistGearInsertionOscAction.__init__, contact_thresholds is set to 7.5 (midpoint). The randomization only happens on reset(). For the first episode before any reset is called, all envs use 7.5. This may be intentional but could subtly affect the first few rollouts.
8. _CACHED_OFFSETS global dict in gear_insertion_rewards.py — unbounded growth
The cache key is (tuple(values), device) and the tensors are only grown, never shrunk. In practice, with fixed offsets this is fine, but it's a pattern that could leak memory if offset values vary (e.g., if someone adds per-episode offset randomization). Consider a bounded cache or document the assumption.
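One dependency-free way to bound the cache (a sketch: the real helper would build a torch tensor on the given device, and lru_cache keys on the same (values, device) tuple as _CACHED_OFFSETS):

```python
from functools import lru_cache

@lru_cache(maxsize=32)  # bounded: least-recently-used entries are evicted
def cached_offset(values: tuple, device: str):
    """Stand-in for the tensor-building helper. Unlike a bare module-level
    dict, the decorator caps the number of live entries, so per-episode
    offset randomization cannot grow the cache without limit."""
    return list(values)  # imagine: torch.tensor(values, device=device)
```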
9. RL Games YAML uses seq_length: 128 equal to horizon_length: 128
With LSTM and seq_length == horizon_length, each minibatch is a single unbroken sequence. This means no sequence boundary handling within a batch. This works but may limit effective batch diversity. Consider whether seq_length: 64 (half horizon) would give better gradient estimates.
10. .gitattributes adds *.jpg filter=lfs and *.pth filter=lfs globally
This now tracks ALL .jpg and .pth files in the repo via LFS, not just the ones in this PR. If someone later adds a small JPG or checkpoint for testing, it'll be LFS-tracked. Consider scoping to specific directories (e.g., docs/images/*.jpg, models/**/*.pth).
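A scoped version might look like this (the directory names are illustrative; they should match wherever the PR actually stores its images and checkpoints):

```
# .gitattributes: limit LFS tracking to known large-binary locations
docs/**/*.jpg   filter=lfs diff=lfs merge=lfs -text
models/**/*.pth filter=lfs diff=lfs merge=lfs -text
```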
11. events.py API migration (write_root_pose_to_sim → write_root_pose_to_sim_index) touches all environments
The changes from write_root_pose_to_sim to write_root_pose_to_sim_index (and similar for velocity/joints) modify shared event functions used by other environments. This is likely needed for the Isaac Lab 3.0 migration, but it's a significant change bundled into a task-specific PR. If any of the old callers pass different argument patterns, this could break them.
12. Documentation is high quality but could note the isaaclab_tasks.direct.automate dependency
The place_gear_in_gripper event imports factory_control from isaaclab_tasks.direct.automate. This creates a dependency on the Factory task package. Worth noting in the environment setup docs or in a comment.
✅ What's Well Done
- The keypoint squashing reward design with three scales (baseline/coarse/fine) is a solid curriculum-like approach
- EMA smoothing on the action space with per-episode randomized factor is a nice touch for sim-to-real transfer
- The GraspConfig dataclass cleanly separates embodiment concerns from task logic
- Dead-zone thresholds on position/rotation prevent chattering near the target
- Documentation follows the existing Arena workflow pattern consistently
- The check_gear_insertion_geometry shared utility for the XY+Z check (addressing the duplication concern from the prior review) is the right approach
Verdict
The PR is architecturally sound and follows Arena conventions well. The main concerns are: (1) the observation function mutating action-term state (item 6), (2) the global .gitattributes scope change (item 10), and (3) the shared event function API migration bundled into this PR (item 11). The rest are quality improvements that can be addressed incrementally.
- Fix mutable default args (list -> tuple) in rewards/terminations
- Remove duplicate copyright header in train_rl_games.py
- Remove unused asset classes (NistAssembledBoard, NistGearSmall, NistGearLarge)
- Add gpu_collision_stack_size comment explaining 4 GB requirement
- Add docstrings for private API access (_terms, get_link_incoming_joint_force)
- Extract shared check_gear_insertion_geometry utility
- Fix GraspConfig.grasp_rot_offset default to xyzw identity [0,0,0,1]
- Replace manual quat sign-flip with quat_unique
- Fix observation reset() consistency (zero z/w, normalize, flip)
- Vectorize place_gear_in_gripper gripper loop
- Set player.deterministic: True in RL Games YAML
- Add cached offset helper for reward tensor allocations
- Migrate environment from RLFramework enum to string-based entry point
- Add distributed training support to train_rl_games.py

Signed-off-by: odoherty <odoherty@nvidia.com>
Force-pushed from 7e35c6f to b5d877e
alexmillane left a comment
Thanks for putting this together. The docs look great. I have some comments on the code style.
```python
class NistGearInsertionOscActionCfg(OperationalSpaceControllerActionCfg):
    """Config for :class:`NistGearInsertionOscAction`."""

    class_type: type[ActionTerm] = NistGearInsertionOscAction

    fixed_asset_name: str = "gears_and_base"
    peg_offset: tuple[float, float, float] = (0.0, 0.0, 0.0)
    fixed_pos_noise_levels: tuple[float, float, float] = (0.001, 0.001, 0.001)
    pos_action_bounds: tuple[float, float, float] = (0.05, 0.05, 0.05)
    pos_action_threshold: tuple[float, float, float] = (0.02, 0.02, 0.02)
    rot_action_threshold: tuple[float, float, float] = (0.097, 0.097, 0.097)
    pos_threshold_noise_level: tuple[float, float, float] = (0.25, 0.25, 0.25)
    rot_threshold_noise_level: tuple[float, float, float] = (0.29, 0.29, 0.29)
    ema_factor_range: tuple[float, float] = (0.05, 0.2)
    contact_threshold_range: tuple[float, float] = (5.0, 10.0)
    # Dead zone: zero out small commanded deltas on the task wrench.
    pos_dead_zone: tuple[float, float, float] = (0.0005, 0.0005, 0.0005)  # m, ~0.5 mm
    rot_dead_zone: float = 0.001  # rad
```
Suggestion to add comments to these parameters. It's difficult to tell what they do without comments.
Also, consider including a default CONSTANT value so it's easier to avoid mistakes
```python
yaw_bolt = math_utils.euler_xyz_from_quat(quat_rel_bolt, wrap_to_2pi=True)[2]
yaw_bolt = torch.where(yaw_bolt > math.pi / 2, yaw_bolt - 2 * math.pi, yaw_bolt)
yaw_bolt = torch.where(yaw_bolt < -math.pi, yaw_bolt + 2 * math.pi, yaw_bolt)
yaw_action = (yaw_bolt + math.radians(180.0)) / math.radians(270.0) * 2.0 - 1.0
```
This operation, divide by 270 degrees, seems odd. Suggestion to move this to a function with a description of the operation.
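Extracted as a function, the mapping is an affine rescale of a 270° yaw range onto the [-1, 1] action interval (the range endpoints below are inferred from the expression, so treat them as an assumption):

```python
import math

def yaw_to_action(yaw_bolt: float) -> float:
    """Map a wrapped yaw in [-pi, pi/2] onto [-1, 1].

    Adding radians(180) shifts the lower bound -pi up to zero; the usable
    yaw span is 270 degrees, hence the division by radians(270) before
    rescaling the [0, 1] fraction to [-1, 1].
    """
    return (yaw_bolt + math.radians(180.0)) / math.radians(270.0) * 2.0 - 1.0
```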
```python
unrot_pi = math_utils.quat_from_euler_xyz(
    torch.full((n,), -math.pi, device=self.device),
    torch.zeros(n, device=self.device),
    torch.zeros(n, device=self.device),
)
```
Is this a rotation of -pi around x?
Suggestion to rename to negative_pi_around_x.
The current name doesn't give much of a hint of what it's doing.
```python
    torch.zeros(n, device=self.device),
    torch.zeros(n, device=self.device),
)
quat_rel_bolt = math_utils.quat_mul(unrot_pi, ee_quat)
```
Suggestion to rename to what this means.
q_xyzw_bolt_from_ee could mean the rotation that rotates vectors in the ee-frame to the bolt frame.
```python
yaw_bolt = torch.where(yaw_bolt > math.pi / 2, yaw_bolt - 2 * math.pi, yaw_bolt)
yaw_bolt = torch.where(yaw_bolt < -math.pi, yaw_bolt + 2 * math.pi, yaw_bolt)
```
Not sure I get this. At first glance, it appears like some normalization, however on one side we compare with math.pi / 2 and on the other side with -math.pi.
It's a bit difficult to make sense of this. Suggestion to move it to a function where you describe the inputs and outputs.
We should move this to the new interop approach where we call lab scripts directly.
You can refer to https://isaac-sim.github.io/IsaacLab-Arena/main/pages/example_workflows/reinforcement_learning/step_2_policy_training.html#training-command
for how to use lab's train script on an Arena-registered env
I checked the current setup: Arena’s documented --external_callback flow exists for rsl_rl/train.py, but Isaac Lab’s rl_games/train.py does not currently expose the same callback hook. Can we add the same --external_callback handling to Isaac Lab’s RL-Games train/play scripts that RSL-RL already has?
Reference - submodules/IsaacLab/scripts/reinforcement_learning/rsl_rl/train.py:69
I see. Can you break this MR into two MRs? One for the env & task itself, the other for docs, including how to train with your env. We can at least get the first MR in soon.
We will work with Lab on rl_games to see how feasible it is, and then upgrading Lab submodule will happen at a later time (early June).
pos = wp.to_torch(board.data.root_pos_w) - env.scene.env_origins
quat = wp.to_torch(board.data.root_quat_w)
offset = torch.tensor(peg_offset, device=env.device, dtype=torch.float32).unsqueeze(0).expand(env.num_envs, 3)
return pos + math_utils.quat_apply(quat, offset)
I'm not sure I understand this function.
Is this calculating the peg position w.r.t. the board?
Shouldn't such a function take in two SceneEntityCfg, look up their positions and calculate the difference.
The function name suggests that this is the peg position in the env frame... Why then do we need the board position?
def held_gear_base_pos_in_env_frame(
    env: ManagerBasedRLEnv,
    gear_cfg: SceneEntityCfg = SceneEntityCfg("medium_nist_gear"),
    held_gear_base_offset: tuple[float, ...] = (2.025e-2, 0.0, 0.0),
) -> torch.Tensor:
    """Position of the held gear's insertion point (root + offset in gear frame) in env frame."""
    gear: RigidObject = env.scene[gear_cfg.name]
    gear_pos = wp.to_torch(gear.data.root_pos_w) - env.scene.env_origins
    gear_quat = wp.to_torch(gear.data.root_quat_w)
    held_off = (
        torch.tensor(held_gear_base_offset, device=env.device, dtype=torch.float32).unsqueeze(0).expand(env.num_envs, 3)
    )
    return gear_pos + math_utils.quat_apply(gear_quat, held_off)
This appears to be a duplicate of the function above with the variables renamed.
Suggestion to unify these functions into some common function with general variable names.
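The two terms really do share one formula: asset root position plus a fixed offset rotated by the asset's root quaternion. A minimal sketch of the unified helper, with the quaternion rotation written out in plain Python (xyzw ordering, matching the convention this PR standardizes on); the function names here are illustrative, not the actual Arena API:

```python
import math

def quat_apply_xyzw(q, v):
    """Rotate vector v by unit quaternion q = (x, y, z, w): v' = v + w*t + u x t, t = 2*(u x v)."""
    x, y, z, w = q
    tx = 2.0 * (y * v[2] - z * v[1])
    ty = 2.0 * (z * v[0] - x * v[2])
    tz = 2.0 * (x * v[1] - y * v[0])
    return (
        v[0] + w * tx + (y * tz - z * ty),
        v[1] + w * ty + (z * tx - x * tz),
        v[2] + w * tz + (x * ty - y * tx),
    )

def offset_pos_in_env_frame(root_pos, root_quat_xyzw, offset):
    """Generic form of peg_pos_in_env_frame / held_gear_base_pos_in_env_frame."""
    rotated = quat_apply_xyzw(root_quat_xyzw, offset)
    return tuple(p + r for p, r in zip(root_pos, rotated))

# 90-degree yaw about z: an offset along +x maps to +y.
s = math.sin(math.pi / 4)
pos = offset_pos_in_env_frame((0.0, 0.0, 0.0), (0.0, 0.0, s, math.cos(math.pi / 4)), (1.0, 0.0, 0.0))
```

Each concrete function then just passes its own asset config and offset default into the shared helper.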
reset_quat = reset_quat * flip.unsqueeze(-1)
self._prev_noisy_quat[env_ids] = reset_quat
def __call__(
Suggestion to break long functions up like this into smaller functions with informative names.
+1 right now it resolves params, reads robot state, injects noise, computes
velocities, reads force/action state, masks quaternion, then concatenates everything.
How about
def __call__(...):
    params = self._resolve_call_params(...)
    ft_state = self._read_fingertip_state(env, params)
    noisy_state = self._apply_sensor_noise(env, ft_state, params)
    motion_obs = self._compute_motion_observations(env, noisy_state)
    force_obs = self._compute_force_observations(env, params)
    action_obs = self._compute_action_observations(env)
    target_obs = self._compute_target_observations(env, noisy_state, params)
    return self._pack_policy_observation(
        target_obs=target_obs,
        fingertip_quat=self._mask_symmetric_quat(noisy_state.quat),
        motion_obs=motion_obs,
        force_obs=force_obs,
        action_obs=action_obs,
    )
import isaaclab.sim as sim_utils
_REPO_ROOT = Path(__file__).resolve().parent.parent.parent
Assets are stored locally; the NIST assets have not been moved to Nucleus cloud storage.
name = "gears_and_base"
tags = ["object"]
usd_path = str(_REPO_ROOT / "assets" / "gearbase_and_gears_gearbase_root.usd")
I have uploaded your NIST folder to S3. It will be under f"{ISAACLAB_NUCLEUS_DIR}/Arena/assets/object_library/NIST/" in 6 hours.
_FACTORY_ASSET_DIR = f"{ISAACLAB_NUCLEUS_DIR}/Factory"

FRANKA_MIMIC_OSC_CFG = ArticulationCfg(
Can you add it to franka.py, as another embodiment?
@seun Doherty Can you also include a few test functions on your env, e.g. obs/reward/event terms for the task? Feel free to refer to our tests folder to see if anything can be reused.
# EE pose, force feedback, prev actions) instead of the default embodiment obs.
# The task_obs group adds privileged state for the critic separately.
embodiment.observation_config.policy.actions = None
embodiment.observation_config.policy.gripper_pos = None
embodiment.observation_config.policy.eef_pos = None
embodiment.observation_config.policy.eef_quat = None
embodiment.observation_config.policy.joint_pos = None
embodiment.observation_config.policy.joint_vel = None
embodiment.observation_config.policy.nist_gear_policy_obs = ObsTerm(
    func=NistGearInsertionPolicyObservations,
    params={
        "robot_name": "robot",
        "board_name": gears_and_base.name,
        "peg_offset": list(peg_tip_offset),
        "fingertip_body_name": "panda_fingertip_centered",
        "force_body_name": "force_sensor",
        "pos_noise_level": 0.0,
        "rot_noise_level_deg": 0.0,
        "force_noise_level": 0.0,
    },
)
# OSC torque control uses task-specific penalties (action magnitude, jerk,
# contact force) instead of the default joint-level regularizers.
embodiment.reward_config.action_rate = None
embodiment.reward_config.joint_vel = None
All embodiment-related overrides should go in franka.py with your newly added embodiment.
)
scene = Scene(assets=[table, assembled_board, medium_gear, gears_and_base, light])

grasp_cfg = GraspConfig(
This seems to be embodiment-dependent; can you move it to franka.py, into your embodiment class?
from isaaclab_assets.robots.franka import FRANKA_PANDA_HIGH_PD_CFG

def franka_gripper_joint_setter(
Signed-off-by: odoherty <odoherty@nvidia.com>
"gripper_joint_setter_func": franka_gripper_joint_setter,
"end_effector_body_name": "panda_hand",
"finger_joint_names": "panda_finger_joint[1-2]",
"grasp_rot_offset": [1.0, 0.0, 0.0, 0.0],
"grasp_offset": [0.02, 0.0, -0.128],
"arm_joint_names": "panda_joint[1-7]",
}
Gripper never closes on the gear: hand_close_width equals hand_grasp_width.
get_gear_insertion_grasp_config returns hand_grasp_width=0.03 and hand_close_width=0.03. The place_gear_in_gripper event calls _write_gripper_width sequentially with both values; because they are identical (franka_gripper_joint_setter sets each finger to width / 2.0 = 0.015 m in both passes), the "close" step is a no-op and the gear is never gripped. The GraspCfg dataclass defaults confirm hand_close_width=0.0 (fully closed) for the baseline, and the docstring of _write_gripper_width describes the two passes as "open → then close". hand_close_width should be set to a value smaller than hand_grasp_width, matching the gear's outer diameter, to create positive contact force on the gear.
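A toy reproduction of the no-op, using a hypothetical stand-in for _write_gripper_width (the real joint names and the 0.02 m close width are illustrative, not the actual config values):

```python
def write_gripper_width(finger_pos, width):
    """Simplified stand-in for _write_gripper_width: each finger moves to width / 2.0."""
    finger_pos["joint1"] = width / 2.0
    finger_pos["joint2"] = width / 2.0
    return dict(finger_pos)

fingers = {"joint1": 0.0, "joint2": 0.0}

# Pass 1 (open) and pass 2 (close) with identical widths: the close step is a no-op.
after_open = write_gripper_width(fingers, 0.03)
after_close = write_gripper_width(fingers, 0.03)
no_op = after_open == after_close  # fingers never move after the "open" pass

# Fix: close to a width below the gear's outer diameter.
after_fixed_close = write_gripper_width(fingers, 0.02)
grips = after_fixed_close["joint1"] < after_open["joint1"]  # fingers actually squeeze
```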
def _wrap_yaw_to_action_range(yaw: torch.Tensor) -> torch.Tensor:
    """Represent wrapped yaw in the policy command interval."""
    yaw = torch.where(yaw > math.pi / 2, yaw - 2 * math.pi, yaw)
    return torch.where(yaw < -math.pi, yaw + 2 * math.pi, yaw)
Yaw wrapping dead zone produces out-of-range reset action
EE yaw values in (π/2, π) are not in the valid action range [-π, π/2] (270° span), but _wrap_yaw_to_action_range fails to handle this gap: the first where maps them to (-3π/2, -π), and the second where immediately maps them back to (π/2, π). _target_yaw_to_action then returns a value > 1, so _smoothed_actions[env_ids, 5] is initialized out of [-1, 1] at reset. With a small EMA factor (0.05–0.20), the smoothed yaw action stays inflated for many steps, causing the OSC to command orientations outside the intended range. Add a final clamp(-1.0, 1.0) in _ee_quat_to_yaw_action so that any EE yaw in the dead zone is clamped to the nearest valid boundary instead of escaping the action space.
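The dead zone is easy to see with a scalar version of the two torch.where branches (a sketch; the real code operates on tensors):

```python
import math

def wrap_yaw_to_action_range(yaw: float) -> float:
    """Scalar version of _wrap_yaw_to_action_range."""
    if yaw > math.pi / 2:
        yaw -= 2 * math.pi
    if yaw < -math.pi:
        yaw += 2 * math.pi
    return yaw

def target_yaw_to_action(yaw: float) -> float:
    """Map yaw onto the normalized action interval (270-degree span)."""
    return (yaw + math.pi) / math.radians(270.0) * 2.0 - 1.0

# A reset yaw of 135 deg sits in the dead zone (pi/2, pi): the first branch maps
# it to -1.25*pi, the second maps it straight back -- it never lands in [-pi, pi/2].
yaw = 0.75 * math.pi
wrapped = wrap_yaw_to_action_range(yaw)   # 0.75*pi again
action = target_yaw_to_action(wrapped)    # ~1.33, outside [-1, 1]
clamped = max(-1.0, min(1.0, action))     # proposed fix: clamp to the boundary
```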
yaw_action = self._ee_quat_to_yaw_action(ee_quat)

self._smoothed_actions[env_ids, 3:5] = 0.0
self._smoothed_actions[env_ids, 5] = yaw_action
The return value of _target_yaw_to_action should be clamped to [-1, 1] to guard against EE yaw values in the dead zone (π/2, π). When the arm resets with a yaw in this gap, _wrap_yaw_to_action_range maps it back onto itself rather than into [-π, π/2], causing _target_yaw_to_action to produce a value > 1 that inflates the EMA-smoothed yaw action for many steps after reset.
-yaw_action = self._ee_quat_to_yaw_action(ee_quat)
+yaw_action = self._ee_quat_to_yaw_action(ee_quat).clamp(-1.0, 1.0)
 self._smoothed_actions[env_ids, 3:5] = 0.0
 self._smoothed_actions[env_ids, 5] = yaw_action
"end_effector_body_name": "panda_hand",
"finger_joint_names": "panda_finger_joint[1-2]",
"grasp_rot_offset": [1.0, 0.0, 0.0, 0.0],
"grasp_offset": [0.02, 0.0, -0.128],
Can you add a comment explaining why this offset? Also, it would be better to have these numbers as named CONSTANTs.
@staticmethod
def add_cli_args(parser: argparse.ArgumentParser) -> None:
    parser.add_argument("--embodiment", type=str, default="franka_nist_gear_osc", help="Robot embodiment")
Nit: your env class name mentions osc_env. If you expect this env to only work with embodiments equipped with OSC control, can you remove this arg? If you expect this env to work with other embodiments, can you rename the env class?
teleop_device=teleop_device,
env_cfg_callback=mdp.assembly_env_cfg_callback,
rl_framework_entry_point="rl_games_cfg_entry_point",
rl_policy_cfg="isaaclab_arena_examples.policy:nist_gear_insertion_osc_rl_games.yaml",
Can you avoid hardcoding the module path?
Use something similar to rsl_rl_policy like rl_policy_cfg=f"{base_rsl_rl_policy.__name__}:RLPolicyCfg"
xy_threshold=xy_threshold,
)

task = NistGearInsertionTask(
Can you update the task name as NistGearInsertionRLTask?
Does it need to be NIST-dependent?
)

_FRANKA_MIMIC_OSC_USD_PATH = f"{ISAACLAB_NUCLEUS_DIR}/Factory/franka_mimic.usd"
Can you move all of the _FRANKA_MIMIC_OSC* definitions to franka/nist_gear_insertion/ and keep only the embodiment class construction in franka.py?
Can you refactor the files into a cleaner structure, with modular functions that are reusable and readable?
isaaclab_arena/
tasks/
nist_gear_insertion/
__init__.py
task.py # NistGearInsertionTask, GearInsertionGeometryCfg
geometry.py # peg/held gear pose helpers, success geometry
observations.py # generic gear insertion obs terms
rewards.py # generic geometry/keypoint rewards
terminations.py # gear insertion success/drop terms
events.py # gear/grasp reset events, if task-generic
grasp.py # GraspCfg + reusable grasp reset helpers
embodiments/
franka/
franka.py # generic Franka embodiments
nist_gear_insertion/
__init__.py
nist_gear_osc.py # FrankaNistGearInsertionOscEmbodiment
nist_gear_grasp.py # Franka-specific GraspCfg factory/helper, optional
isaaclab_arena_environments/
mdp/
nist_gear_insertion/
__init__.py
osc_action.py # NistGearInsertionOscAction/Cfg
osc_observations.py # 24-D OSC policy observation
osc_rewards.py # OSC action/contact/success-pred rewards
nist_assembled_gearmesh_osc_environment.py
from isaaclab.managers import ObservationTermCfg

def _offset_pos_in_env_frame(
These are not really observation-specific:
_offset_pos_in_env_frame()
check_gear_insertion_geometry()
held_gear_base_pos_in_env_frame()
peg_pos_in_env_frame()
I’d move them to:
isaaclab_arena/tasks/gear_insertion/geometry.py
return root_pos + math_utils.quat_apply(root_quat, offset_tensor)

def peg_pos_in_env_frame(
return _offset_pos_in_env_frame(env, board, peg_offset)

def held_gear_base_pos_in_env_frame(
return quat_unique(quat)

def check_gear_insertion_geometry(
return peg_pos - held_base

class NistGearInsertionPolicyObservations(ManagerTermBase):
NistGearInsertionPolicyObservations is not generic task observation logic. It depends on the
OSC action term through:
get_nist_gear_insertion_arm_action(env)
That couples task observations to a specific RL/OSC controller. I’d move it to something
like:
isaaclab_arena_environments/mdp/nist_gear_insertion/osc_observations.py
Keep gear_insertion_observations.py for generic observations like peg position, held gear
base position, and peg delta.
self._robot_name: str = p.get("robot_name", "robot")
self._board_name: str = p.get("board_name", "gears_and_base")
self._peg_offset_values = tuple(p.get("peg_offset", [0.0, 0.0, 0.0]))
self._peg_offset = torch.tensor(self._peg_offset_values, device=env.device)
self._fingertip_body: str = p.get("fingertip_body_name", "panda_fingertip_centered")
self._force_body: str = p.get("force_body_name", "force_sensor")

self._pos_noise: float = p.get("pos_noise_level", 0.00025)
self._rot_noise_deg: float = p.get("rot_noise_level_deg", 0.1)
self._force_noise: float = p.get("force_noise_level", 1.0)
@dataclass
class GearInsertionPolicyObsCfg:
    robot_name: str
    board_name: str
    peg_offset: list[float]
    fingertip_body_name: str
    force_body_name: str
    pos_noise_level: float = 0.0
    rot_noise_level_deg: float = 0.0
    force_noise_level: float = 0.0
obs = torch.cat(
    [
        fingertip_pos_rel,
        obs_quat,
        ee_linvel,
        ee_angvel,
        noisy_force,
        force_threshold,
        prev_actions,
    ],
    dim=-1,
The concatenation is hard to audit: readers have to count dimensions manually. Add named constants or a short layout tuple:
POLICY_OBS_LAYOUT = (
    ("fingertip_pos_rel", 3),
    ("fingertip_quat", 4),
    ("ee_linvel", 3),
    ("ee_angvel", 3),
    ("force", 3),
    ("force_threshold", 1),
    ("prev_actions", 7),
)
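Such a layout tuple can also drive a dimension check and named slicing. A sketch (the 24-D total matches the policy observation size mentioned elsewhere in this PR; `obs_slices` is an illustrative helper, not existing code):

```python
POLICY_OBS_LAYOUT = (
    ("fingertip_pos_rel", 3),
    ("fingertip_quat", 4),
    ("ee_linvel", 3),
    ("ee_angvel", 3),
    ("force", 3),
    ("force_threshold", 1),
    ("prev_actions", 7),
)

# Total dimension of the concatenated observation vector.
OBS_DIM = sum(width for _, width in POLICY_OBS_LAYOUT)

def obs_slices(layout):
    """Map each component name to its slice in the concatenated vector."""
    slices, start = {}, 0
    for name, width in layout:
        slices[name] = slice(start, start + width)
        start += width
    return slices
```

Consumers (e.g. the critic or debugging tools) can then index by name instead of hard-coded offsets.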
@@ -0,0 +1,297 @@
# Copyright (c) 2025-2026, The Isaac Lab Arena Project Developers (https://github.com/isaac-sim/IsaacLab-Arena/blob/main/CONTRIBUTORS.md).
Sorry, I cannot fully understand this code.
Pulling in Codex for help.
• For rewards, I’d suggest two levels of cleanup: split the reward config builder, and split the reward term implementations.

GearInsertionRewardsCfg
Right now the constructor builds everything inline. I’d make the reward groups obvious:

class GearInsertionRewardsCfg:
    def __init__(...):
        geometry = GearInsertionRewardGeometry(...)
        self._add_keypoint_rewards(geometry)
        self._add_geometry_bonus_rewards(geometry)
        self._add_action_penalty_rewards()
        self._add_success_prediction_reward(geometry)

Suggested helpers:

    def _make_common_keypoint_params(...)
    def _make_geometry_bonus_params(...)
    def _add_keypoint_rewards(...)
    def _add_geometry_bonus_rewards(...)
    def _add_osc_penalty_rewards(...)
    def _add_success_prediction_reward(...)

That makes the reward intent scan as: alignment shaping, insertion bonuses, control penalties, success prediction loss, instead of one long constructor.

gear_peg_keypoint_squashing
This term does several things in __call__():
- validates num_keypoints
- resolves held gear offset
- computes held gear insertion pose
- resolves peg offset
- computes target peg pose
- applies random target noise
- computes keypoint distance
- applies squashing function

I’d break it like:

def __call__(...):
    self._validate_num_keypoints(num_keypoints)
    held_base_pos, gear_quat = self._compute_held_base_pose(env, gear_cfg, held_gear_base_offset)
    target_pos, target_quat = self._compute_target_pose(env, board_cfg, peg_offset)
    target_pos = self._apply_target_noise(target_pos, peg_offset_xy_noise)
    return self._compute_squashed_keypoint_reward(
        target_pos, target_quat, held_base_pos, gear_quat, keypoint_scale, squash_a, squash_b
    )

That would make the reward’s meaning much clearer.

Shared Geometry
gear_insertion_geometry_bonus and success_prediction_error both call _compute_gear_position_success(), and termination does similar work too. I’d move this into a shared geometry module, isaaclab_arena/tasks/gear_insertion/geometry.py, with helpers like:

    compute_peg_pose(...)
    compute_held_gear_base_pose(...)
    compute_insertion_success(...)

Then reward code becomes much easier to read because it says what it wants, not how to rebuild frames every time.

OSC-Specific Rewards
These rewards are policy/controller-specific:
- osc_action_magnitude_penalty
- osc_action_delta_penalty
- wrist_contact_force_penalty
- success_prediction_error

They depend on get_nist_gear_insertion_arm_action(env). I’d suggest moving them out of generic task rewards into something like isaaclab_arena_environments/mdp/nist_gear_insertion/osc_rewards.py. Generic task rewards should be things like keypoint alignment and insertion success. OSC action smoothness, force threshold, and seventh-action success prediction belong to the OSC RL environment.

success_prediction_error
This one especially deserves helpers because it has stateful behavior:

def __call__(...):
    true_success = self._compute_true_success(...)
    self._update_prediction_loss_gate(true_success, delay_until_ratio)
    pred = self._read_success_prediction(env)
    return self._prediction_error(true_success, pred)

Also consider renaming _pred_scale to something clearer, like _success_prediction_loss_enabled or _prediction_loss_weight, because _pred_scale hides the fact that it flips on after enough successes.
return terminated

def gear_mesh_insertion_success(
Both of these functions sound task-specific enough that they probably should not live in the global tasks/terminations.py. Consider moving them under a NIST/gear-insertion module, e.g. tasks/gear_insertion/terminations.py, especially since the default asset names are medium_nist_gear and gears_and_base.
from isaaclab.sensors.contact_sensor.contact_sensor import ContactSensor
from isaaclab.utils.math import combine_frame_transforms

from isaaclab_arena.tasks.observations.gear_insertion_observations import check_gear_insertion_geometry
A termination importing geometry from an observation module is backwards. check_gear_insertion_geometry() is not observation-specific. I’d move it to shared geometry code:
isaaclab_arena/tasks/gear_insertion/geometry.py
with helpers like:
compute_peg_pos(...)
compute_held_gear_base_pos(...)
check_insertion_success(...)
self._smoothed_actions[env_ids, 3:5] = 0.0
self._smoothed_actions[env_ids, 5] = yaw_action
self._smoothed_actions[env_ids, 6] = -1.0
ACTION_DIM = 7
POS_SLICE = slice(0, 3)
ROLL_IDX = 3
PITCH_IDX = 4
YAW_IDX = 5
SUCCESS_IDX = 6
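With those constants, the reset writes become self-describing. A plain-Python sketch (a list stands in for the torch buffer; the yaw value is arbitrary):

```python
ACTION_DIM = 7
POS_SLICE = slice(0, 3)
ROLL_IDX, PITCH_IDX, YAW_IDX, SUCCESS_IDX = 3, 4, 5, 6

# Reset writes now name their targets instead of using magic indices.
smoothed = [0.0] * ACTION_DIM
smoothed[ROLL_IDX] = 0.0
smoothed[PITCH_IDX] = 0.0
smoothed[YAW_IDX] = 0.25
smoothed[SUCCESS_IDX] = -1.0
pos = smoothed[POS_SLICE]
```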
target_yaw = _wrap_yaw_to_action_range(target_yaw)
return _target_yaw_to_action(target_yaw)

def reset(self, env_ids: Sequence[int] | None = None) -> None:
def reset(self, env_ids=None):
    env_ids, num_envs = self._normalize_env_ids(env_ids)
    if num_envs == 0:
        return
    super().reset(env_ids)
    self._reset_action_filter(env_ids, num_envs)
    self._reset_command_thresholds(env_ids, num_envs)
    self._reset_contact_state(env_ids, num_envs)
    self._reset_initial_smoothed_actions(env_ids)
    self._reset_debug_state(env_ids)
"""Return the next end-effector position command in the robot base frame."""
peg_pos_b = self._get_peg_pos() + self.fixed_pos_noise

pos_actions = self._smoothed_actions[:, :3]
    self._force_smoothing_factor * raw_force + (1.0 - self._force_smoothing_factor) * self.force_smooth
)

def _compute_target_position(self, ee_pos_b: torch.Tensor) -> torch.Tensor:
_compute_bounded_target_position_from_policy_action(...)
clipped_delta = self._clip_delta_with_dead_zone(self.delta_pos, self._pos_thresh, self._pos_dead_zone)
return ee_pos_b + clipped_delta

def _compute_target_orientation(self, ee_quat_b: torch.Tensor) -> torch.Tensor:
_compute_bounded_target_orientation_from_policy_yaw(...)
def _compute_target_orientation(self, ee_quat_b: torch.Tensor) -> torch.Tensor:
    """Return the next end-effector orientation command in the robot base frame."""
    target_yaw = _action_to_target_yaw(self._smoothed_actions[:, 5])
Add a docstring explaining why index 5, or a constant defining 5 := yaw.
self._force_body_idx: int | None = None
self._force_smoothing_factor = cfg.force_smoothing_factor

def _get_peg_pos(self, env_ids: torch.Tensor | None = None) -> torch.Tensor:
This seems reusable: how about compute_asset_local_offset_pos(env, fixed_asset_cfg, peg_offset) as a shared helper for obs/rewards/termination?
def _target_yaw_to_action(yaw: torch.Tensor) -> torch.Tensor:
    """Map commanded yaw back to the normalized policy interval."""
    return (yaw + math.pi) / math.radians(270.0) * 2.0 - 1.0
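The "divide by 270 degrees" question earlier in this review comes down to this affine map: yaw in [-pi, pi/2] (a 270-degree span) maps to the action interval [-1, 1]. A scalar sketch; action_to_target_yaw is the assumed inverse, since _action_to_target_yaw's body is not shown in the diff:

```python
import math

YAW_SPAN = math.radians(270.0)  # policy yaw command covers [-pi, pi/2]

def target_yaw_to_action(yaw: float) -> float:
    """Map yaw in [-pi, pi/2] to the normalized action interval [-1, 1]."""
    return (yaw + math.pi) / YAW_SPAN * 2.0 - 1.0

def action_to_target_yaw(action: float) -> float:
    """Assumed inverse of the affine map above."""
    return (action + 1.0) / 2.0 * YAW_SPAN - math.pi

# Endpoints of the yaw range map to the action endpoints.
lo = target_yaw_to_action(-math.pi)
hi = target_yaw_to_action(math.pi / 2.0)
```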
self._smoothed_actions = torch.zeros(self.num_envs, 7, device=self.device)
self.ema_factor = torch.full((self.num_envs, 1), 0.05, device=self.device)
self._pos_bounds = torch.tensor(cfg.pos_action_bounds, device=self.device)
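The 0.05 EMA factor explains why an out-of-range reset value "stays inflated for many steps", as noted in the yaw discussion above: with heavy smoothing, deviations decay by a factor of (1 - alpha) per step. A scalar sketch of the filter:

```python
EMA_FACTOR = 0.05  # matches the reset value used for the action filter above

def ema_update(smoothed: float, raw: float, alpha: float = EMA_FACTOR) -> float:
    """One exponential-moving-average step: small alpha means heavy smoothing."""
    return alpha * raw + (1.0 - alpha) * smoothed

# A raw action held at 1.0 is approached slowly from a smoothed value of 0.0:
# after n steps, x = 1 - (1 - alpha)**n.
x = 0.0
for _ in range(10):
    x = ema_update(x, 1.0)
```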
A few names are a little too generic:
_pos_bounds
_pos_thresh
_rot_thresh
Clearer names:
_position_action_bounds
_position_step_limits
_rotation_step_limits
First off, I really appreciate the volume of work you've put into this; it's clear a lot of effort has gone in. 🙏
I spent a substantial amount of time going through your updated version and left detailed comments on basically every file, with suggestions around file structure and function-level refactors. The main themes are modularity, reusability, and readability: right now it's a bit hard to follow and reuse pieces in isolation. For the math-heavy parts in particular, I ran it by Codex and included its expert suggestions inline so you have a second technical perspective to pull from.
We achieve these themes through, but not limited to:
- Object-oriented classes
- Modular, reusable helper functions
- Docstrings that explain domain-specific math ops
Caveat: I may be missing some of the domain nuance here, so please push back on anything where a suggestion doesn't fit; happy to talk through it.
Could you also refactor out the RL-training-related scripts until we enable rl_games interop first? That matches what we aligned on earlier this week.
Nit -- if you followed any published code/work to implement this RL task, can you include it in your MR? It will help us understand how the task is constructed while reading your code.
Thank you, @xinjie Yao, for the detailed review. I know this is a large and somewhat complex change, so I appreciate the time you spent going through it carefully.

I have made a number of updates so far: refactoring large functions into smaller helpers, moving task-specific code into more appropriate modules, improving modularity/reusability/readability, and adding comments/docstrings where the logic is not obvious. The task is fairly specific and was originally adapted from the Isaac Lab Forge gear insertion direct RL environment, so part of the complexity comes from converting that logic into a manager-based Arena environment. I will add concise references back to the Forge source patterns and explain the domain-specific math/control logic where needed.

I agree with your structural feedback, especially around moving task-specific functions out of generic observation/termination modules and into gear-insertion-specific files. That direction makes the code easier to follow and review. I would like to sync next week with you and Alex after I finish this cleanup pass, since I have been iterating on the structure and want to make sure the final organization aligns with the team’s expectations.
🤖 Isaac Lab Review Bot
Summary
This PR adds a comprehensive NIST gear insertion RL workflow with Franka Panda OSC control, following the Factory/Forge insertion task architecture. The implementation includes well-structured task definitions, keypoint-squashing rewards, domain randomization, and RL-Games integration for LSTM-based asymmetric actor-critic training.
Findings
🔵 Suggestion: The _FRANKA_MIMIC_OSC_JOINT_POS dict in franka_osc.py (line ~65) only includes 8 joints (7 arm + panda_finger_joint2). Consider adding panda_finger_joint1 for completeness, though the mimic joint structure may handle this automatically.
🔵 Suggestion: In osc_action.py line ~292, the _ensure_force_body_idx method uses get_link_incoming_joint_force() which is noted as a private API. Consider adding a brief comment about Isaac Lab version compatibility if this API changes.
🔵 Suggestion: The NistGearInsertionPolicyObservations class zeros the z/w quaternion components (line ~69 in osc_observations.py) for symmetry reduction. A brief docstring explaining this design choice for gear symmetry would help maintainability.
🔵 Suggestion: In events.py, the _run_grasp_ik method (lines 176-198) implements iterative IK. Consider adding a log warning if max iterations are reached without convergence, to help debugging reset-time issues.
🟢 Positive: Excellent test coverage in test_nist_gear_insertion_task.py covering geometry, grasp config, OSC configs, and environment wiring.
🟢 Positive: Clean separation between generic task rewards (rewards.py) and OSC-specific penalties (osc_rewards.py).
🟢 Positive: The grasp reset event properly follows Factory/Forge patterns with IK-based placement and state flushing.
Verdict
Ship it ✅
Well-architected addition following established Isaac Lab patterns. The code is clean, well-documented, and has comprehensive test coverage. The suggestions above are minor improvements that could be addressed in follow-up PRs.
📝 Update (70ed06e): Added instance_name parameter to MediumNistGear.__init__() in object_library.py to align with parent LibraryObject constructor signature. Good fix for consistency.
"""Resolve the wrist force-sensor body index."""
if self._force_body_idx is not None:
    return
body_ids, body_names = self._asset.find_bodies(self.cfg.force_body_name)
find_bodies is called with a bare string instead of a list. Every other find_bodies call in this PR wraps the name in a list (e.g. robot.find_bodies([body_name]) in osc_observations.py, events.py, and terminations.py). Isaac Lab's find_bodies accepts a list[str] of regex patterns; passing a bare string makes Python iterate over each character as a separate pattern, which will match zero bodies and immediately raise the ValueError("Found 0 matches") error below — blocking the environment from starting at all.
-body_ids, body_names = self._asset.find_bodies(self.cfg.force_body_name)
+body_ids, body_names = self._asset.find_bodies([self.cfg.force_body_name])
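The failure mode is generic Python rather than Isaac Lab specific: iterating a string yields its characters. A minimal stand-in (not the real find_bodies implementation) that treats each element of its argument as a full-match regex shows the difference:

```python
import re

def find_bodies(patterns, body_names):
    """Stand-in for the regex-list style of find_bodies:
    each element of `patterns` is treated as a full-match regex."""
    return [n for n in body_names if any(re.fullmatch(p, n) for p in patterns)]

bodies = ["panda_hand", "force_sensor", "panda_fingertip_centered"]

# Correct: the name wrapped in a list is one pattern -> one match.
ok = find_bodies(["force_sensor"], bodies)

# Bug: a bare string iterates per character, so the "patterns" become
# "f", "o", "r", ... and no multi-character body name can match.
broken = find_bodies("force_sensor", bodies)
```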
NIST Gear Insertion PR Update

This update refactors and cleans up the NIST gear insertion implementation for review.

Summary

Ready for review @xyao-nv and @alexmillane
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Summary

What's Included
- Core task (isaaclab_arena/tasks/): task definition, keypoint-squashing rewards, 24-D policy observations with wrist-force feedback, insertion success / gear-drop terminations, domain randomization events
- Environment (isaaclab_arena_environments/): OSC action term (asset-relative, EMA smoothing, dead-zones), Franka mimic OSC robot config, environment definition wiring scene + embodiment + task + RL Games
- RL infrastructure (isaaclab_arena/policy/, scripts/): RL Games action policy wrapper, training script, PPO hyperparameter config
- Documentation (docs/pages/example_workflows/nist_gear_insertion/): 4-page workflow (overview, environment setup, training, evaluation) with GIFs
- Asset registry (isaaclab_arena/assets/object_library.py): NIST board, gear, peg, and connector asset definitions

NOTE: Object library paths for added assets will be updated when assets are uploaded to Nucleus; waiting on @alexmillane