This project implements visual servoing for a Kinova Gen3 7-DOF robotic arm in ROS 1 Noetic. The robot uses image-based visual servoing (IBVS) to detect and track a rectangular object using its end-effector camera. The system extracts corner points of the rectangle and adjusts the robot's position dynamically to align with a predefined reference.
- Real-time Rectangle Detection: Uses ORB feature detection and RANSAC filtering for robust corner point detection.
- ROS Integration: Subscribes to camera image topics, processes frames in OpenCV, and publishes detected features.
- Trajectory Control: Uses FollowJointTrajectoryAction to move the Kinova Gen3 arm.
- Visual Servoing Loop: Computes position corrections based on detected rectangle corner deviations.
- Gazebo Simulation Support: Runs on real hardware or in Ignition Gazebo simulation.
```
+------------------------------------------------------+
|                    ROS Framework                     |
+------------------------------------------------------+
|  Image Capture  |  Feature Detection  |   Control    |
| (Camera Topic)  |   (ORB + RANSAC)    |  (MoveIt!)   |
+------------------------------------------------------+
|                Kinova Gen3 Robot Arm                 |
+------------------------------------------------------+
```
Ensure you have the following installed:
- ROS 1 Noetic
- MoveIt!
- OpenCV (cv_bridge, image_transport)
- Kinova ROS packages (`kortex_driver`, `kortex_examples`)
- Gazebo Fortress/Garden (for simulation)
```bash
# Clone your workspace and install dependencies
cd ~/Desktop/robot_ws/src
git clone https://github.com/your-repo/your-project.git
cd ..
rosdep install --from-paths src --ignore-src -r -y
catkin_make
source devel/setup.bash
```

```bash
# Launch the simulation and start the visual servoing node
roslaunch kortex_examples gazebo3.launch
rosrun kortex_examples visual_servoing.py

# View the processed image stream and trigger the motion
rqt_image_view /edge_detected_image
rostopic pub /move_robot std_msgs/Empty "{}"
```

- Capturing Reference Image: The arm moves to position-1 and captures a reference image.
- Real-time Processing: The camera continuously captures frames and detects the largest rectangle.
- Feature Extraction: ORB detects corner points, filtered using RANSAC for robustness.
- Error Computation: Compares current rectangle corners with the reference and computes an error vector.
- Robot Adjustment: Adjusts the arm’s position iteratively to align with the reference.
- Convergence: The process stops once the alignment error falls below a threshold.
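The error-computation and adjustment steps above can be sketched as follows, assuming a plain proportional law. The gain, the stop threshold, and the pure-translation mapping are illustrative simplifications; a full IBVS controller would use the image Jacobian (interaction matrix).

```python
import numpy as np

def corner_error(current, reference):
    """Stack the per-corner pixel deviations into an 8-vector and
    return it with its norm, which the loop compares to a threshold."""
    e = (np.asarray(current, float) - np.asarray(reference, float)).ravel()
    return e, float(np.linalg.norm(e))

def servo_step(current, reference, gain=0.001):
    """One iteration of the loop: map the mean pixel offset of the four
    corners to a small planar correction (metres) for the end effector,
    moving opposite the image error. Gain value is hypothetical."""
    e, norm = corner_error(current, reference)
    mean_offset = e.reshape(4, 2).mean(axis=0)  # average (dx, dy) in pixels
    return -gain * mean_offset, norm
```

Convergence is then a simple check such as `norm < threshold` after each step; the threshold is tuned to the camera resolution and required alignment accuracy.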
- The robot should accurately align with the rectangle even with slight perturbations.
- Robust corner detection should work even in varying lighting conditions.
- Refine ORB & RANSAC parameters for better robustness.
- Implement deep learning-based corner detection for improved accuracy.
- Extend to 6-DOF pose estimation instead of just 2D alignment.
- Authors: Surabhi Dwivedi, Deepraj Majumdar
- Institution: Indian Institute of Technology, Jodhpur
⭐ If you find this project useful, give it a star! ⭐