I am using a Stretch 3 for a project. The aim is to collect image data through a DSLR that will be mounted on Stretch.
Can I automate the navigation using RTAB-Map?
How do I do that?
The first stage would be manually driving the robot around the environment to map the room. Once I have the map, how can I make the robot navigate it autonomously through a Python script?
I would also need the robot to stop at particular intervals and capture images. The image-capture code would be separate. How do I automate both the navigation and the image capture?
Hi @Yash, good question. I’d recommend starting with Nav2, ROS 2’s 2D navigation stack, instead of RTAB-Map’s 3D mapping. We have a tutorial that covers mapping a room and getting the robot to navigate autonomously within it. Link: Demo 1 - Mapping & Navigation - Stretch Docs
I have tried mapping a room using Nav2 and did get some results.
However, when I tried to run Nav2 after homing, it did not map; it reported that the map node was not being published.
I then freed the robot from its processes and ran the code again, and this time it worked and the room was being mapped.
However, after freeing the processes, the Xbox controller got freed too. When I tried running stretch_xbox_controller_teleop.py in another terminal while Nav2 was running, it showed an error saying I need to free the process and try again.
The map gets saved to ~/stretch_user/maps. When you launch navigation.launch.py, you’ll need to tell Nav2 where to find the map. It’s passed as a ROS 2 launch argument: map:=${HELLO_FLEET_PATH}/maps/nav2_demo_map.yaml
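For the “navigate autonomously and capture at intervals” part of the question: once navigation.launch.py is running with the map argument above, a Python script can send goal poses through Nav2’s Simple Commander API (`nav2_simple_commander`, shipped with Nav2). Below is a minimal sketch under some assumptions: the waypoint coordinates are placeholders you’d pick from your saved map, and `capture_image()` is a stub standing in for your own DSLR trigger code (e.g. gphoto2). This is not code from the Stretch docs, just one way to structure the loop:

```python
import math

# Placeholder stops where the robot should pause and photograph:
# (x, y, yaw) in the map frame. Pick real coordinates from your saved map.
WAYPOINTS = [(1.0, 0.0, 0.0), (2.0, 1.0, math.pi / 2)]

def yaw_to_quaternion(yaw):
    """Convert a planar heading (radians) to a z-axis quaternion (x, y, z, w)."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

def capture_image():
    """Stub: trigger your DSLR here (your separate image-capture code)."""
    print("capturing image")

def main():
    # ROS 2 imports live inside main() so the helpers above can be tested
    # off-robot. BasicNavigator comes from the nav2_simple_commander package.
    import rclpy
    from geometry_msgs.msg import PoseStamped
    from nav2_simple_commander.robot_navigator import BasicNavigator, TaskResult

    rclpy.init()
    navigator = BasicNavigator()
    navigator.waitUntilNav2Active()  # blocks until Nav2 is up with the map

    for x, y, yaw in WAYPOINTS:
        goal = PoseStamped()
        goal.header.frame_id = "map"
        goal.header.stamp = navigator.get_clock().now().to_msg()
        goal.pose.position.x = x
        goal.pose.position.y = y
        _, _, qz, qw = yaw_to_quaternion(yaw)
        goal.pose.orientation.z = qz
        goal.pose.orientation.w = qw

        navigator.goToPose(goal)
        while not navigator.isTaskComplete():
            pass  # could inspect navigator.getFeedback() here

        if navigator.getResult() == TaskResult.SUCCEEDED:
            capture_image()  # stop reached: trigger the camera
        else:
            print(f"failed to reach ({x}, {y}), skipping capture")

    rclpy.shutdown()

# On the robot, call main() (e.g. under an `if __name__ == "__main__":` guard).
```

You may also want `navigator.setInitialPose(...)` before the loop if you don’t set the initial pose in RViz; see the Nav2 Simple Commander examples for the full API.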