Hi, I’m trying to set up the docking station for the Stretch, and I’m using the GitHub code, which hasn’t been perfected, but I got it mostly working. However, I need to incorporate it into my ROS2 framework.
Does anybody know of a codebase that has already done this, and if not, have some advice for doing so? It’s basically just a couple of Python files. I’m already planning to pull in an ArUco tag ROS2 package that I can link in to replace the current Python one, but I still need to do the rest.
Hi @tammerhaddad, welcome to the Stretch forum! To my knowledge, I don’t think anyone has ported that code to ROS2. I’d actually recommend keeping (at least initially) the camera driver / ArUco detection code, and starting your port by replacing the robot drivers. The code currently uses Stretch Body, the Python API to the robot, and you’d be looking to replace this with the ROS2 API to the robot. Fortunately, all of the driver-specific code is kept in one file: normalized_velocity_control.py. The docking code only sends mobile base and head pan commands.
So in normalized_velocity_control.py, you’re looking to replace every instance of robot.base.set_velocity() with a Twist message published to the /cmd_vel topic.
Similarly, replace robot.head commands with calls to the ROS2 follow joint trajectory action.
Thanks for the warm welcome!
I figured the movement portion would be the most difficult part and the first step, especially since I was planning on porting it from stretch_body.robot to hello_helpers’ hello_misc. I’m running the stretch_nav2 navigation driver as part of my launch, so I run into a conflict if I don’t. But publishing to /cmd_vel seems much better, so I’ll do that.
When you say “keeping … the camera driver / ArUco detection code”, do you mean putting them in a ROS2 node? Or co-running them as plain Python files while I work on the drivers?
Or literally ignoring them and testing the drivers independently?
I mean co-running them as plain Python files. In the ROS world, Stretch has the Realsense ROS drivers for the D435if head camera and a detect_aruco_markers ROS2 node for ArUco detection. These are used by many programs in the ROS2 world, but not by Stretch Docking. Stretch Docking uses the Python library for the Realsense and the Python library for cv2.aruco. So what I’m suggesting is sticking with these Python versions while you port the robot drivers to the /cmd_vel topic and ROS2 action servers. Hopefully this makes your life easier, since you’re not porting everything all at once. And you shouldn’t run into any issues, since you’re not using the Realsense ROS drivers. Then later, you can port the camera/ArUco code to ROS2.