
🤖 Perception Pipeline — Final Release

Robots4Sustainability | ROS 2 Vision Pipeline for Robotic Object Detection & Pose Estimation
Built on YOLOv8 · Open3D · Kinova Arm Platform · ROS 2 Jazzy


📋 Table of Contents

- Setup & Configuration
- Running the Perception Pipeline
- Further Reading

Setup & Configuration

For full prerequisites, environment setup, cloning, virtual environment creation, ROS 2 workspace build, and Zenoh middleware configuration:

👉 Setup & Installation Wiki


Running the Perception Pipeline

Launch the pipeline across three terminals in order.


Terminal 1 — Camera

Launch the Kinova Vision camera node:

ros2 launch kinova_vision kinova_vision.launch.py \
    device:=192.168.1.12 \
    depth_registration:=true \
    color_camera_info_url:=file:///home/r4s/r4s-ws/src/ros2_kortex_vision/launch/calibration/default_color_calib_1280x720.ini \
    depth_camera_info_url:=file:///home/r4s/r4s-ws/src/ros2_kortex_vision/launch/calibration/default_depth_calib_480x270.ini
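The `color_camera_info_url` and `depth_camera_info_url` arguments expect `file://` URLs pointing at calibration `.ini` files. As a minimal sketch (plain Python; the path is simply the one from the launch command above, adjust it to your workspace), such a URL can be built from an absolute filesystem path:

```python
from pathlib import Path

def calib_url(path: str) -> str:
    """Turn an absolute calibration-file path into a file:// URL."""
    return Path(path).as_uri()

print(calib_url(
    "/home/r4s/r4s-ws/src/ros2_kortex_vision/launch/calibration/"
    "default_color_calib_1280x720.ini"
))
# → file:///home/r4s/r4s-ws/src/ros2_kortex_vision/launch/calibration/default_color_calib_1280x720.ini
```

Note that `Path.as_uri()` requires an absolute path, which is also what the launch arguments expect.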

📖 Camera Configuration Wiki

Terminal 2 — Action Dispatcher

⚠️ Remember to activate your virtual environment before running.

Launch all perception nodes via the action server dispatcher:

ros2 launch perception action_launch.py lazy_loading:=true

Terminal 3 — Brain Client (Optional)

If the FSM is not running, you can run the brain client directly or send a goal to the action server manually:

# Option A — Run the brain client directly
python brain_client.py car_objects motor_grip 0.0

# Option B — Send a goal to the action server directly
ros2 action send_goal /run_perception_pipeline \
  my_robot_interfaces/action/RunVision \
  "{task_name: 'car_objects', object_class: 'motor_grip'}" \
  --feedback
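The quoted goal argument passed to `ros2 action send_goal` is an inline YAML mapping of the action's request fields. A small sketch (plain Python; the field names are simply those from the command above) showing how such a goal string can be composed, e.g. when scripting several tasks:

```python
# Compose the inline YAML goal string used with `ros2 action send_goal`.
# Field names mirror the RunVision request in the command above.
def make_goal(task_name: str, object_class: str) -> str:
    return "{task_name: '%s', object_class: '%s'}" % (task_name, object_class)

print(make_goal("car_objects", "motor_grip"))
# → {task_name: 'car_objects', object_class: 'motor_grip'}
```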

📖 Action Server & Brain Client Wiki


📚 Further Reading

Full Documentation