Using LIDAR for path planning to move Stretch around obstacles

Hi,

I was wondering whether it’s possible to use the LIDAR sensor on Stretch for path planning and obstacle avoidance when Stretch moves around objects.

I was looking at some path planning repos like AtsushiSakai/PythonRobotics on GitHub (Python sample codes for robotics algorithms) and was wondering if something like this could be used out of the box with Stretch?

Hi @rishi, it is possible to use the 2D Lidar sensor for path planning and obstacle avoidance. We use it within FUNMAP, a ROS package we’ve developed for mapping, path planning, and navigation with obstacle avoidance.

At the Python level, it should be possible to use the library you’ve linked with some effort. The 2D Lidar sensor has a nice Python library for fetching data, and it looks like the PythonRobotics library has a nice algorithm for building a grid map from Lidar data.
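
For example, fetching scans at the Python level looks roughly like this (a minimal sketch using the rplidar-roboticia library; the /dev/ttyUSB0 device path is an assumption and may differ on your robot):

from rplidar import RPLidar

# Connect to the 2D Lidar sensor (device path is an assumption;
# check /dev/serial/by-id/ for the actual port on your robot).
lidar = RPLidar('/dev/ttyUSB0')

try:
    # Each scan is a list of (quality, angle_deg, distance_mm)
    # tuples covering one full revolution of the sensor.
    for i, scan in enumerate(lidar.iter_scans()):
        print('scan %d: %d measurements' % (i, len(scan)))
        if i >= 10:
            break
finally:
    lidar.stop()
    lidar.stop_motor()
    lidar.disconnect()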

Let me know if you’d like to see an example or have another approach in mind.


Yes! It would be amazing to see an example. Thanks a lot!

Great! I will put one together over the weekend. Expect an update here on Monday.


Hi @bshah,

Just wanted to follow up on this.

Hi @rishi, thanks for the ping. I’ve written two demos for your reference and committed them to the following branch: feature/rplidar_tool. These scripts use the rplidar-roboticia library to interface with the 2D Lidar sensor. They are:

  • stretch_rplidar_jog.py - Demonstrates the API used to interface with the sensor. This script can also save Lidar scans into a dataset, which makes it easier to develop algorithms offline.
  • stretch_rplidar_mapping.py - Demonstrates how to generate grid maps from Lidar scans (live or from a saved dataset), using PythonRobotics’ lidar_to_grid_map module. A rough sketch of this workflow follows after this list.
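
To give a feel for what the mapping script does, here is a minimal sketch (not the actual script) that loads saved scans and turns one into an occupancy grid with PythonRobotics’ lidar_to_grid_map module. The pickle layout is an assumption for illustration; the real dataset format is defined by stretch_rplidar_jog.py:

import pickle

import numpy as np
import matplotlib.pyplot as plt

# lidar_to_grid_map.py is the PythonRobotics module fetched in the
# setup steps below.
import lidar_to_grid_map as lg

# Assumed layout: a list of scans, each scan a list of
# (quality, angle_deg, distance_mm) tuples as yielded by rplidar.
with open('saved_scans.pickle', 'rb') as f:
    scans = pickle.load(f)

scan = scans[0]
angles = np.radians([m[1] for m in scan])
dists = np.array([m[2] for m in scan]) / 1000.0  # mm -> m

# Obstacle endpoints in the sensor frame.
ox = np.sin(angles) * dists
oy = np.cos(angles) * dists

# Ray-cast from the sensor origin to mark free vs. occupied cells.
occupancy_map, min_x, max_x, min_y, max_y, resolution = \
    lg.generate_ray_casting_grid_map(ox, oy, xy_resolution=0.05)

plt.imshow(occupancy_map, cmap='PiYG_r')
plt.show()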

At the moment, the mapping script only accepts previously captured Lidar scans. There seems to be a bug in how Matplotlib visualizes scans captured live through the RPLidar library (I can share more details if you’re interested). I will push an update once it’s resolved.

To make the demos more accessible, we plan to ship them with Stretch_body. Until then, you can try them out using the following setup steps:

# create a workspace and fetch the demo branch
mkdir -p ~/repos
cd ~/repos
git clone https://github.com/hello-robot/stretch_body.git --branch feature/rplidar_tool
cd stretch_body/tools/bin
# fetch the PythonRobotics mapping module the demos depend on
curl https://raw.githubusercontent.com/AtsushiSakai/PythonRobotics/master/Mapping/lidar_to_grid_map/lidar_to_grid_map.py -o lidar_to_grid_map.py
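
If the rplidar-roboticia library isn’t already present on your robot, you may also need something like:

pip install rplidar-roboticia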

Next, you may run the demos and visualize the grid maps (with a monitor plugged into the robot) using:

python stretch_rplidar_jog.py --read_measurements ~/saved_scans.pickle

[Ctrl-C to close the previous script and save the dataset]

python stretch_rplidar_mapping.py --lidar_file ~/saved_scans.pickle

The grid maps will be visualized in a Matplotlib window, showing the open space as seen by the robot’s Lidar sensor. Generally, the next step in mapping is to use an algorithm such as Iterative Closest Point (ICP) to align each new scan to the previous ones as the robot moves around. This yields a cloud of points that can be turned into a grid map that grows as the robot explores. With that map, you can use any common grid-map-based planning algorithm to plan paths around obstacles in the environment.
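
As a toy illustration of that last step (not part of the demos), a breadth-first search over an occupancy grid fits in a few lines. Here 0 marks free cells and 1 marks occupied cells; in practice the grid would come from the mapping stage above:

from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid.

    grid: list of lists, 0 = free, 1 = occupied.
    start, goal: (row, col) tuples.
    Returns a list of (row, col) cells, or None if no path exists.
    """
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk parents back to the start to recover the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(plan_path(grid, (0, 0), (3, 3)))

PythonRobotics also ships more capable planners (A*, RRT, and others) that operate on the same grid representation.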

Let me know if you have any questions, or come across any issues running these demos.
