Robots4Sustainability | ROS 2 Vision Pipeline for Robotic Object Detection & Pose Estimation
Built on YOLOv8 · Open3D · Kinova Arm Platform · ROS 2 Jazzy
For full prerequisites (environment setup, repository cloning, virtual environment creation, ROS 2 workspace build, and Zenoh middleware configuration), see the project wiki.
Launch the pipeline across three terminals in order.
Terminal 1 — Camera
Launch the Kinova Vision camera node:
ros2 launch kinova_vision kinova_vision.launch.py \
device:=192.168.1.12 \
depth_registration:=true \
color_camera_info_url:=file:///home/r4s/r4s-ws/src/ros2_kortex_vision/launch/calibration/default_color_calib_1280x720.ini \
depth_camera_info_url:=file:///home/r4s/r4s-ws/src/ros2_kortex_vision/launch/calibration/default_depth_calib_480x270.ini

Terminal 2 — Action Dispatcher
⚠️ Remember to activate your virtual environment before running.
Launch all perception nodes via the action server dispatcher:
ros2 launch perception action_launch.py lazy_loading:=true

If the FSM is not running, you can alternatively run the brain client directly or send commands manually:
# Option A — Run brain client directly
python brain_client.py car_objects motor_grip 0.0
# Option B — Send a direct task command
ros2 action send_goal /run_perception_pipeline my_robot_interfaces/action/RunVision "{task_name: 'car_objects', object_class: 'motor_grip'}" --feedback

📖 Action Server & Brain Client Wiki
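If you script tasks rather than typing the command by hand, the goal in Option B can be assembled programmatically. This is a minimal sketch, not part of the repository: it only assumes the action name, type, and the two goal fields (`task_name`, `object_class`) shown in the command above, and builds the equivalent `ros2 action send_goal` invocation using just the standard library. In a ROS 2 node you would instead use an `rclpy.action.ActionClient` with `send_goal_async`.

```python
import shlex
import subprocess  # only needed if you actually dispatch the command

# Action name and type taken verbatim from the manual command above.
ACTION_NAME = "/run_perception_pipeline"
ACTION_TYPE = "my_robot_interfaces/action/RunVision"


def build_send_goal_args(task_name: str, object_class: str) -> list[str]:
    """Build the argv list for `ros2 action send_goal` with a RunVision goal.

    Goal field names (task_name, object_class) are inferred from the
    documented command; adjust if the action definition changes.
    """
    goal_yaml = f"{{task_name: '{task_name}', object_class: '{object_class}'}}"
    return [
        "ros2", "action", "send_goal",
        ACTION_NAME,
        ACTION_TYPE,
        goal_yaml,
        "--feedback",
    ]


if __name__ == "__main__":
    args = build_send_goal_args("car_objects", "motor_grip")
    # Print a copy-pasteable, shell-safe version of the command.
    print(shlex.join(args))
    # To dispatch it for real (requires a sourced ROS 2 environment):
    # subprocess.run(args, check=True)
```

Passing the argv list straight to `subprocess.run` (without `shell=True`) avoids quoting pitfalls in the YAML goal string.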