Here is a summary of the approaches we discussed with @keshavshankar regarding available IK solutions and grasping an object marked with an ArUco marker. We covered two scripts with which the grasp can be accomplished.
- Click to Pre-grasp Action Server (ROS2)
  - We have a MoveToPregrasp action server implementation that lets a user pass scaled or normalized pixel coordinates (u, v) from the D435i head camera image frame; the server then moves the end effector to a pre-grasp pose (i.e., slightly away from and pointing at the object).
  - Note that the robot's base is only rotated, not translated. Therefore this is expected to be used when the robot is already near the object and the object is visible in the D435i head camera.
  - Currently this feature is implemented only as a ROS2 action server. To use it as a non-ROS2 Python implementation, you will have to manually extract the code from move_to_pregrasp_state.py; the IK control code can be found in stretch_ik_control.py.
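As a sketch of how the (u, v) input could be prepared: the snippet below converts a raw pixel click into normalized coordinates. The [0, 1] normalization convention and image size are assumptions for illustration; check the action definition for the exact scaling MoveToPregrasp expects.

```python
def normalize_pixel(u: int, v: int, width: int, height: int) -> tuple:
    """Convert a raw pixel click (u, v) into normalized [0, 1] coordinates.

    The [0, 1] convention is an assumption for illustration; the actual
    MoveToPregrasp action may expect a different scaling.
    """
    if not (0 <= u < width and 0 <= v < height):
        raise ValueError("click falls outside the image")
    return u / (width - 1), v / (height - 1)

# Example: a click at the bottom-right corner of a 1280x720 color image
u_norm, v_norm = normalize_pixel(1279, 719, 1280, 720)
```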
- Visual servoing demo (Python)
  - This code provides an example of using visual servoing to reach and grasp a target, such as a cube with an ArUco marker on it.
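Conceptually, visual servoing drives the error between the gripper and the target toward zero. A minimal proportional-control sketch (the gain value and frame conventions are assumptions, not taken from the demo):

```python
def servo_step(fingertip_midpoint, target_center, gain=0.5):
    """One proportional visual-servoing step.

    Returns a Cartesian velocity command that moves the point between the
    fingertips toward the target center. The gain is illustrative only.
    """
    error = [t - f for f, t in zip(fingertip_midpoint, target_center)]
    return [gain * e for e in error]

# Gripper 10 cm behind the cube along x: the command moves it forward.
cmd = servo_step([0.0, 0.0, 0.0], [0.1, 0.0, 0.0])
```

Iterating this step while re-detecting the marker each frame is what closes the loop: as the gripper approaches, the error and hence the commanded velocity shrink.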
  - The grasp detector uses the following criteria:
    - The distance between the midpoint of the fingertips and the center of the cube is small.
- The gripper’s actuator effort has a sufficiently large magnitude with the correct sign.
- The distance between the fingertips is larger than a minimum value.
- The distance between the fingertips is smaller than a maximum value.
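The four criteria above can be combined into a single predicate. A sketch with placeholder thresholds (the actual values live in the demo code and will differ; closing effort is assumed here to be negative, which sets the sign check):

```python
import math

def grasp_detected(fingertip_midpoint, cube_center, effort, fingertip_gap,
                   max_dist=0.02, min_effort=-10.0,
                   min_gap=0.01, max_gap=0.08):
    """Return True when all four grasp criteria hold.

    Thresholds are illustrative placeholders; a negative effort is assumed
    to mean the gripper is squeezing, which may differ on the real robot.
    """
    dist = math.dist(fingertip_midpoint, cube_center)
    return (dist <= max_dist                          # fingers centered on cube
            and effort <= min_effort                  # squeezing hard enough
            and min_gap <= fingertip_gap <= max_gap)  # gap within bounds

# A plausible successful grasp reading:
ok = grasp_detected([0.0, 0.0, 0.0], [0.005, 0.0, 0.0],
                    effort=-15.0, fingertip_gap=0.04)
```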
  - To use a different ArUco marker, you can edit the following file:
  - Modifying the demo to grasp a different object with an ArUco marker should be easy; altering the following grasp parameters should enable the robot to grasp a different object:
The expected implementation of a robust grasp solution is to move the robot to the object's location, use Click to Pre-grasp to align the end effector toward the object, and then use the visual servoing implementation to grasp the object.
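The three stages can be sketched as a simple sequence. All three step functions below are hypothetical stand-ins for the real navigation, pre-grasp, and servoing calls; they are injected so the ordering can be shown without any robot or ROS2 dependency:

```python
def grasp_pipeline(navigate, move_to_pregrasp, visual_servo_grasp, click_uv):
    """Chain the three stages discussed above.

    navigate, move_to_pregrasp, and visual_servo_grasp are injected
    callables standing in for the real implementations.
    """
    navigate()                    # bring the base near the object
    move_to_pregrasp(click_uv)    # align the end effector via clicked (u, v)
    return visual_servo_grasp()   # close the loop and grasp

# Trace the call order with stubs:
log = []
result = grasp_pipeline(lambda: log.append("navigate"),
                        lambda uv: log.append("pregrasp"),
                        lambda: log.append("servo") or True,
                        click_uv=(0.5, 0.5))
```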