Transform Points from Wrist Camera to Base

To transform points from the head camera frame to the base frame, we set the point's frame to camera_depth_frame and transform it to base_link using tf:

import rospy
import tf
from geometry_msgs.msg import PointStamped

# create transform listener
tf_listener = tf.TransformListener()
tf_listener.waitForTransform("/camera_depth_frame", "/base_link", rospy.Time(0), rospy.Duration(10.0))

# point in camera coordinates
camera_point = PointStamped()
camera_point.header.frame_id = "camera_depth_frame"
camera_point.header.stamp = rospy.Time(0)

camera_point.point.x = float(x_3d)
camera_point.point.y = float(y_3d)
camera_point.point.z = float(depth)
        
# apply transform
base_point = tf_listener.transformPoint("base_link", camera_point)
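For intuition, transformPoint is just applying the rigid transform that tf looked up. A minimal numpy sketch of the underlying math, using made-up rotation/translation values (not the real Stretch calibration), assuming a point (x_3d, y_3d, depth) in the camera frame:

```python
import numpy as np

# Hypothetical base_link <- camera_depth_frame transform (illustrative
# values only): rotation matrix R and translation t in meters.
R = np.array([[ 0.0,  0.0, 1.0],
              [-1.0,  0.0, 0.0],
              [ 0.0, -1.0, 0.0]])   # camera optical axes remapped onto base axes
t = np.array([0.1, 0.0, 1.3])       # camera origin expressed in base_link

# (x_3d, y_3d, depth) in the camera frame
camera_point = np.array([0.2, -0.05, 0.8])

# transformPoint effectively computes p_base = R @ p_camera + t
base_point = R @ camera_point + t
print(base_point)  # -> [ 0.9  -0.2   1.35]
```

tf resolves the chain of frames between camera_depth_frame and base_link and composes them into one such rotation-plus-translation before applying it.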

Now, we have installed the wrist-mounted camera from the upgrade kit on our Stretch RE2. How can we achieve the same thing? Scrolling through the list of ROS topics, camera_depth_frame still exists, but we don’t see an equivalent for the wrist camera.

Any help will be much appreciated, thanks!

Hi @arjung, are you using the multi_camera.launch.py launch file? You should see the wrist-mounted camera’s topics under the “/gripper_camera” namespace. Here’s what you should see:

$ ros2 topic list
/camera/accel/imu_info
/camera/accel/metadata
/camera/accel/sample
/camera/accel/sample_corrected
/camera/aligned_depth_to_color/camera_info
/camera/aligned_depth_to_color/image_raw
/camera/aligned_depth_to_color/image_raw/compressed
/camera/aligned_depth_to_color/image_raw/compressedDepth
/camera/aligned_depth_to_color/image_raw/theora
/camera/aligned_depth_to_infra1/camera_info
/camera/aligned_depth_to_infra1/image_raw
/camera/aligned_depth_to_infra1/image_raw/compressed
/camera/aligned_depth_to_infra1/image_raw/compressedDepth
/camera/aligned_depth_to_infra1/image_raw/theora
/camera/color/camera_info
/camera/color/image_raw
/camera/color/image_raw/compressed
/camera/color/image_raw/compressedDepth
/camera/color/image_raw/theora
/camera/color/metadata
/camera/depth/camera_info
/camera/depth/color/points
/camera/depth/image_rect_raw
/camera/depth/image_rect_raw/compressed
/camera/depth/image_rect_raw/compressedDepth
/camera/depth/image_rect_raw/theora
/camera/depth/metadata
/camera/extrinsics/depth_to_accel
/camera/extrinsics/depth_to_color
/camera/extrinsics/depth_to_depth
/camera/extrinsics/depth_to_gyro
/camera/extrinsics/depth_to_infra1
/camera/gyro/imu_info
/camera/gyro/metadata
/camera/gyro/sample
/camera/infra1/camera_info
/camera/infra1/image_rect_raw
/camera/infra1/image_rect_raw/compressed
/camera/infra1/image_rect_raw/compressedDepth
/camera/infra1/image_rect_raw/theora
/camera/infra1/metadata
/gripper_camera/aligned_depth_to_color/camera_info
/gripper_camera/aligned_depth_to_color/image_raw
/gripper_camera/aligned_depth_to_color/image_raw/compressed
/gripper_camera/aligned_depth_to_color/image_raw/compressedDepth
/gripper_camera/aligned_depth_to_color/image_raw/theora
/gripper_camera/color/camera_info
/gripper_camera/color/image_rect_raw
/gripper_camera/color/image_rect_raw/compressed
/gripper_camera/color/image_rect_raw/compressedDepth
/gripper_camera/color/image_rect_raw/theora
/gripper_camera/color/metadata
/gripper_camera/depth/camera_info
/gripper_camera/depth/color/points
/gripper_camera/depth/image_rect_raw
/gripper_camera/depth/image_rect_raw/compressed
/gripper_camera/depth/image_rect_raw/compressedDepth
/gripper_camera/depth/image_rect_raw/theora
/gripper_camera/depth/metadata
/gripper_camera/extrinsics/depth_to_color
/gripper_camera/extrinsics/depth_to_depth
/parameter_events
/rosout
/tf_static

Hi @bshah, thanks for your prompt response, as always!

We are currently on ROS 1 Noetic, and it seems like we’ll have to upgrade to ROS 2 Humble for this.
Based on the README here, it appears that the documentation for migrating to ROS 2 is not yet public. Would this be something you could help provide to us?

Furthermore, is this upgrade completely backwards compatible, or is there some chance that our pipeline built upon ROS 1 Noetic will have to be modified?

Thanks once again for your continuous help!

Hi @arjung, you can find the upgrade guide for Ubuntu 22.04 (which contains ROS 2 Humble) here:

Thanks for catching that error in our docs. I’ve corrected the mistake.


The SDKs changed between ROS 1 and ROS 2; e.g. where we previously used the rospy library to connect to ROS via Python, we now use rclpy. Therefore, it will take some effort to migrate your pipeline to ROS 2. I’m happy to answer any questions you have while changing the code to work with ROS 2.


Hi @bshah, after the upgrade to ROS 2, will we still be able to use ROS 1 or will we have to switch everything over to ROS 2?

It’s possible to dual boot the robot with Ubuntu 20.04 (which contains ROS 1 Noetic) and Ubuntu 22.04 (which contains ROS 2 Humble), so you can choose which version to use while the robot boots up. Using both distros at the same time is not possible. The same installation guide has instructions on dual booting.


Hi @bshah, thanks very much for your help! I was finally able to create a separate partition for Ubuntu 22.04 with ROS 2 Humble, which allows for dual booting.

It turns out that the code snippet from the first post above (written for ROS 1, which simply transforms a point from the camera coordinate frame to the base coordinate frame) does not work for ROS 2. Do you know of analogous commands which would work for ROS 2? I’ve tried looking at creating listeners in ROS 2 (e.g. here) but haven’t had much success implementing the same functionality.

Here’s the basic code snippet which works for ROS 1:

import rospy
import tf
from geometry_msgs.msg import PointStamped

# initialize new ros node
rospy.init_node('transform_debug', anonymous=True)

# create transform listener
tf_listener = tf.TransformListener()
tf_listener.waitForTransform("/camera_depth_frame", "/base_link", rospy.Time(0), rospy.Duration(10.0))

# point in camera coordinates
camera_point = PointStamped()
camera_point.header.frame_id = "camera_depth_frame"
camera_point.header.stamp = rospy.Time(0)

camera_point.point.x = float(x_3d)
camera_point.point.y = float(y_3d)
camera_point.point.z = float(depth)
        
# apply transform
base_point = tf_listener.transformPoint("base_link", camera_point)

Any help will be much appreciated, thanks in advance!
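For reference, the ROS 2 Humble equivalent uses tf2_ros instead of the old tf library: a Buffer paired with a TransformListener replaces tf.TransformListener, and Buffer.transform() replaces transformPoint(). Below is a minimal, untested sketch; the frame names are carried over from the ROS 1 snippet, x_3d/y_3d/depth are placeholder values, and the tf2_geometry_msgs side-effect import is required so that Buffer.transform() knows how to convert a PointStamped:

```python
import rclpy
from rclpy.node import Node
from rclpy.duration import Duration
from geometry_msgs.msg import PointStamped
from tf2_ros import Buffer, TransformListener
import tf2_geometry_msgs  # noqa: F401  (registers PointStamped with tf2)

rclpy.init()
node = Node('transform_debug')

# create transform listener backed by a buffer
tf_buffer = Buffer()
tf_listener = TransformListener(tf_buffer, node)

# point in camera coordinates (placeholder values for x_3d, y_3d, depth)
camera_point = PointStamped()
camera_point.header.frame_id = 'camera_depth_frame'
camera_point.header.stamp = rclpy.time.Time().to_msg()  # time zero = latest available

camera_point.point.x = 0.2
camera_point.point.y = -0.05
camera_point.point.z = 0.8

# spin until the buffer has received the transform over /tf
while rclpy.ok() and not tf_buffer.can_transform(
        'base_link', 'camera_depth_frame', rclpy.time.Time()):
    rclpy.spin_once(node, timeout_sec=0.1)

# apply transform (analogous to transformPoint in ROS 1)
base_point = tf_buffer.transform(camera_point, 'base_link',
                                 timeout=Duration(seconds=10.0))
node.get_logger().info(f'point in base_link: {base_point.point}')

node.destroy_node()
rclpy.shutdown()
```

The spin loop matters: unlike ROS 1, nothing is processed in the background unless the node is spun, so the buffer stays empty until spin_once() handles incoming /tf messages.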