Navigation for Stretch 3


I’m currently using Nav2 on Stretch 3 for navigation tasks, but I’m encountering issues where Stretch 3 frequently runs into chairs or tables. I suspect this might be due to the lidar at the base not being able to detect objects above its scanning plane. If this is the case, what would be a better approach for detecting these kinds of obstacles? I have also tried the demo code for obstacle detection and Stretch Nav2.

Thank you!

Hi Allen,

There are two additional Stretch ROS2 navigation packages that you can try out for your application. I suggest trying both approaches to map your space and navigate to see which is more performant.

  1. Fast Unified Navigation, Mapping, and Planning (FUNMAP), found here.
  2. Real-Time Appearance-Based Mapping (RTABMap), found here.

FUNMAP uses the robot’s head camera to create Max Height Images (MHIs) of the environment. The photo below shows an example MHI, where the color of the points corresponds to the height of that point.

Above: example FUNMAP output
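Conceptually, a Max Height Image collapses a 3D point cloud onto a 2D grid, keeping only the tallest point seen in each (x, y) cell. Here is a minimal numpy sketch of that idea (illustrative only, not FUNMAP's actual implementation; all names and grid parameters are assumptions):

```python
import numpy as np

def max_height_image(points, cell_size=0.05, grid_shape=(100, 100)):
    """Collapse an (N, 3) point cloud into a 2D grid holding the max Z per cell."""
    mhi = np.full(grid_shape, -np.inf)
    ix = (points[:, 0] / cell_size).astype(int)
    iy = (points[:, 1] / cell_size).astype(int)
    # keep only points that fall inside the grid
    mask = (ix >= 0) & (ix < grid_shape[0]) & (iy >= 0) & (iy < grid_shape[1])
    # accumulate the maximum height observed in each occupied cell
    np.maximum.at(mhi, (ix[mask], iy[mask]), points[mask, 2])
    return mhi

# A floor point and a table edge land in the same cell: the table height wins,
# which is exactly why an MHI can represent obstacles above the lidar plane.
pts = np.array([[0.12, 0.12, 0.0], [0.13, 0.14, 0.74]])
mhi = max_height_image(pts)
print(mhi[2, 2])  # 0.74
```

This is also why an MHI-based map can capture tabletops and chair seats that a planar lidar scan misses.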

RTABMap also uses the robot’s head camera to map the environment. It differs from FUNMAP in that it saves the color and location of the 3D points, and it may save more than one Z point at a given (X, Y) position on the floor.

Above: example RTABMap output

Both packages provide nodes to send navigation goals to the robot’s base based on the respective maps. Good luck with your project!

Hi @hello-lamsey !
Thanks for the reply! This information is very useful. After trying them out, I still encountered some issues with both packages.


When I run ros2 launch stretch_funmap, nothing shows up in RViz, as seen in the following image:

Also, when I run ros2 run stretch_core keyboard_teleop --ros-args -p mapping_on:=True, I received the following:

Node keyboard_teleop waiting to connect to /funmap/trigger_head_scan

I was able to map the environment and save the map; however, when I run ros2 launch stretch_rtabmap and try to perform navigation tasks, the robot often does not respond when I set a 2D Goal Pose.

Thank you!

Hi @allen,

Thank you for the details about trying both applications. Here are some references and debugging suggestions. Also, could you please share the following to help figure out next steps?

  • While running ros2 launch stretch_funmap
    • Output of ros2 node list (to make sure everything is running)
    • First few lines of the output of ros2 topic echo /tf | grep map (to see if the map frame is being published)
  • While running ros2 launch stretch_rtabmap
    • A screenshot of the map generated by RTABMap with obstacle overlay visible (red and cyan points)


It looks like your /tf tree is not connecting to the /map frame. When I launch funmap, this is what it looks like after a few seconds:

funmap may hang if it is not receiving images. Is your D435i head camera streaming? When you check the camera, you should see something like this in the output:

D435i Stream Settings:
 D435I_COLOR_SIZE=[640, 480]
 D435I_DEPTH_SIZE=[640, 480]

If your camera is not streaming properly, then rebooting the robot (full shutdown and power switch off for a few seconds) should fix this.

Using keyboard teleoperation, I was able to take a scan with funmap, which looked like:


I was able to map a relatively open space and send 2D goal poses using RTABMap as shown below:

Does the space that your robot is navigating have a lot of free space (not red or cyan in the map visualization)? It could be that the path planner isn’t able to find a route from the robot’s current pose to the goal.
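To illustrate why a cluttered map can make the planner fail silently: if obstacle cells fully separate the robot's pose from the goal, no route exists at all. A toy reachability check over an occupancy grid (a generic BFS sketch, not FUNMAP's or Nav2's planner; 0 = free, 1 = obstacle):

```python
from collections import deque

def has_free_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid of 0s (free) and 1s (obstacles)."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

open_room = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
blocked   = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]  # a wall of obstacle cells splits the room
print(has_free_path(open_room, (0, 0), (2, 2)))  # True
print(has_free_path(blocked,  (0, 0), (2, 2)))   # False
```

If the goal lands in (or behind) red/cyan cells, a real planner will similarly report no path or simply not move.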

Good luck with debugging!


Hi @hello-lamsey ,

Thanks for all the information! Surprisingly, everything is working well today. The keyboard connected successfully with stretch_funmap, and I was able to start mapping. I’ve also included screenshots of the output of ros2 node list and the first few lines of the output of ros2 topic echo /tf | grep map.

Camera check

ros2 node list

ros2 topic echo /tf | grep map


Mapping and navigation
I noticed in the GitHub README of stretch_funmap that there are only two lines of code for mapping. Are there any additional steps required specifically for navigation? Does the mapping mode also handle navigation? If there’s a different way to launch navigation in Funmap, is it possible to save my map so I don’t have to remap it every time I need to navigate?
Obstacle detection in navigation
When I use 2D pose goals for automated planning and navigation, my robot still collides with walls or chairs despite having a fully mapped path. I suspect localization might be an issue because, although the robot performs well in simulation, it occasionally deviates from the planned path and collides with objects along the sides. How can I address these issues?


Here is a recording of the 2D Nav Goal not working. It seems like my ground is colored purple; is there any suggested way to debug this?

Hi @allen,

Here are some answers to your questions:


Mapping + Navigation

FUNMAP automatically saves maps in the folder ~/stretch_user/debug/merged_maps/. An example of what a saved map rendering (a Max Height Image) looks like is shown below.

Each map has a timestamp in its filename. You can use the following command to load a previously generated map, where xxx... is the timestamp of the map that you want to load. Don’t include the .yaml extension in the command line argument.

ros2 launch stretch_funmap map_yaml:=/home/hello-robot/stretch_user/debug/merged_maps/merged_map_2024xxxxxxxxxx
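Since each saved map carries its timestamp in the filename, a small helper can pick out the most recent one to pass as map_yaml. This is just a convenience sketch (the helper name and the demo directory are my own, standing in for ~/stretch_user/debug/merged_maps/):

```python
import os
import tempfile

def newest_map_basename(map_dir):
    """Return the path (without the .yaml extension) of the most recently
    timestamped merged map in map_dir, or None if there are no maps."""
    yamls = sorted(f for f in os.listdir(map_dir) if f.endswith(".yaml"))
    if not yamls:
        return None
    # timestamps sort lexicographically, so the last entry is the newest
    return os.path.join(map_dir, yamls[-1][: -len(".yaml")])

# demo with a throwaway directory containing two fake map files
d = tempfile.mkdtemp()
for name in ("merged_map_20240101000000.yaml", "merged_map_20240301000000.yaml"):
    open(os.path.join(d, name), "w").close()
print(newest_map_basename(d))  # ...merged_map_20240301000000 (no .yaml suffix)
```

The returned path can then be substituted into the launch command above.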

You can use FUNMAP for navigation directly. You can use the “2D Nav Goal” button in RViz to send navigation targets, and FUNMAP will plan a piecewise linear trajectory to get there. An example is shown below.

Obstacle Detection

Currently, FUNMAP relies on wheel odometry for localization. This can be imprecise and lead to large accumulated error over long paths or on slippery flooring (like carpet).
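To see why this matters, here's a toy sketch (illustrative numbers only, not FUNMAP's odometry model) of how a small systematic per-step heading error, such as wheel slip on carpet, accumulates into a large position error over a long path:

```python
import math

def odometry_error(steps, step_len=0.1, heading_bias=0.01):
    """Integrate 2D wheel odometry with a constant heading bias per step and
    return the distance from the true (straight-line) end pose."""
    x = y = theta = 0.0
    for _ in range(steps):
        theta += heading_bias            # small systematic error each step
        x += step_len * math.cos(theta)
        y += step_len * math.sin(theta)
    true_x = steps * step_len            # ground truth: straight line along x
    return math.hypot(x - true_x, y)

print(round(odometry_error(10), 3))   # short path: small error
print(round(odometry_error(200), 3))  # long path: error grows dramatically
```

This is why resetting the pose estimate periodically, or fusing in lidar scan matching, keeps long navigation runs on track.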

One way to update the robot’s current pose estimate is to use the “2D Pose Estimate” button in RViz to manually reset the robot’s location.

Another solution for online localization would be to incorporate laser scan matching localization using the robot’s lidar scanner. This package publishes a more accurate transform from /map to /odom in the /tf tree, and may improve performance while navigating. However, this is not currently natively supported inside ROS2 FUNMAP.


I’m not familiar with the coloring scheme used in RTABMap, but there may be some further details in the original RTABMap documentation that describe the issues you’re facing.
