Outdated transforms compared to RealSense image timestamps

Hi everyone,

We’re using object recognition on the D435 to determine the positions of objects. To get coordinates relative to the base, we do a lookup from the camera frame to base_link in the TF tree (ROS2). However, these lookups cause problems because we want to use the image’s timestamp as the reference time (that is the moment the detection is valid for). The problem is that the latest available transform is often several seconds (3+) older than the image stamp, so the lookup fails.
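For context, the lookup we do looks roughly like this (a minimal sketch, not our exact code; the image topic and the 0.5 s timeout are placeholders):

```python
import rclpy
from rclpy.duration import Duration
from rclpy.node import Node
from rclpy.time import Time
from sensor_msgs.msg import Image
from tf2_ros import Buffer, TransformException, TransformListener


class ObjectLookup(Node):
    def __init__(self):
        super().__init__('object_lookup')
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        self.create_subscription(
            Image, '/camera/color/image_raw', self.image_cb, 10)

    def image_cb(self, msg: Image):
        try:
            # Look up camera -> base_link at the image timestamp, so the
            # object position matches what the camera actually saw.
            tf = self.tf_buffer.lookup_transform(
                'base_link',
                msg.header.frame_id,
                Time.from_msg(msg.header.stamp),
                timeout=Duration(seconds=0.5))
        except TransformException as ex:
            # This is the failure we see: the newest transform in the
            # buffer is often 3+ seconds older than the image stamp.
            self.get_logger().warn(f'TF lookup failed: {ex}')
            return
        # ...transform the detected object's coordinates into base_link...


def main():
    rclpy.init()
    rclpy.spin(ObjectLookup())


if __name__ == '__main__':
    main()
```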

What’s the reason for the low publishing rate of transforms? Can we configure the update rate somehow?

Thank you!

Hi @snoato, thank you for your post.

What are you using to publish the robot’s TFs? Are you using stretch_driver or something else? Thank you for clarifying.

Best,
Shehab

Hi @shehab, yes, we’re using stretch_driver.

Thanks! Best,
Daniel

Hey @snoato, using TF lookups to get the object w.r.t. the base_link makes sense and we do this all the time. Here are some issues we’ve seen cause TFs to become stale:

  1. The DDS transport layer in ROS2 Humble has multicast enabled by default. When Stretch is connected to a network with other robots, there’s a chance another robot is on and your TFs are being sent to its ROS2 daemon, which slows things down significantly. For example, if you run rqt_tf_tree multiple times, you’ll get different results on each run. You can isolate your robot from the network by setting export ROS_DOMAIN_ID=<your-unique-domain-id>.
  2. A slow TF2 broadcaster in the path of the lookup can hold everything up. If rqt_tf_tree reports that all of Stretch Driver’s TFs are broadcasting at 15 Hz but the object’s TF broadcaster is slower, the lookup will be throttled to the rate and latency of the weakest link in the chain.
  3. If the TF is only broadcast while the camera sees the object (i.e. the broadcaster doesn’t republish cached frames), closed-loop planning that doesn’t keep the object in the camera’s view will fail as soon as the camera loses sight of it; see the sketch after this list for one way around that.
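If it’s the third case you’re hitting, one workaround is a small node that caches the last detection and keeps re-broadcasting it at a steady rate. Here’s a minimal sketch (the node and method names are illustrative, and re-stamping the cached transform is only valid if the object can be assumed static while out of view):

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import TransformStamped
from tf2_ros import TransformBroadcaster


class CachingObjectBroadcaster(Node):
    def __init__(self):
        super().__init__('caching_object_broadcaster')
        self.broadcaster = TransformBroadcaster(self)
        self.last_tf = None
        # Re-broadcast at 15 Hz to match the rest of the TF tree.
        self.create_timer(1.0 / 15.0, self.timer_cb)

    def on_detection(self, tf: TransformStamped):
        # Call this whenever perception produces a fresh
        # camera-frame -> object-frame transform.
        self.last_tf = tf

    def timer_cb(self):
        if self.last_tf is None:
            return
        # Re-stamp the cached transform so lookups at recent timestamps
        # keep succeeding even when the camera no longer sees the object.
        self.last_tf.header.stamp = self.get_clock().now().to_msg()
        self.broadcaster.sendTransform(self.last_tf)
```

If the object can move while it’s out of view, it’s safer to stop broadcasting and have the planner treat the missing TF as “object lost” instead.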