Hello Stretch Community!
I am a master's student in Northwestern's MS in Robotics program, and this summer I have been interning with the fantastic Hello Robot crew. I was brought on to help develop for Stretch, specifically with high-needs end users in mind. One feature in particular that I thought might be of interest to the broader Stretch community is automating the base to return to a specific location and orientation using ArUco markers.
This feature will continue to be developed and can be found in the features/aruco_navigation branch. In stretch_core, I added a new launch file called aruco_navigation.launch. This launch file opens an RViz window to visualize ArUco frames and 2D Nav Goals. I also launch the move_base node, which takes advantage of our recent improvements to the NavStack with the Global Planner and DWA local planner. Make sure you have the most recent updates to the stretch_navigation configuration files.
The aruco_navigation.py script in stretch_ros/hello_helpers provides functions to save a base position relative to an ArUco tag and then return to that exact position.
Once the aruco_navigation.launch file is launched,
roslaunch stretch_core aruco_navigation.launch
simple Python scripts can be built with the hello_helpers functions for a wide variety of applications.
One great use case is auto-docking. In this example, I used the functions from hello_helpers/aruco_navigation.py to automatically navigate the base into the docking station.
First, the tag for the docking station must be added to the list of known ArUco tags in the ArUco marker dictionary. I chose "docking," but you can assign any string in the "name" field.
'aruco_marker_info':
  '245':
    'length_mm': 88.0
    'use_rgb_only': False
    'name': 'docking'
    'link': None
If you are unsure of the ID number for the tag you are working with, you can echo the /aruco/marker_array topic
rostopic echo /aruco/marker_array
in a new terminal window after launching aruco_navigation.launch, then hold the tag so it is visible to the RealSense camera and look for the number in the "id" field.
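If you prefer to find the ID from a script, here is a minimal sketch that prints the ID of every tag the camera currently sees. It assumes the /aruco/marker_array topic carries a visualization_msgs/MarkerArray, as published by the stretch_core ArUco detection node.

import rospy
from visualization_msgs.msg import MarkerArray

def print_ids(msg):
    # Each detected tag appears as one Marker; its ArUco ID is in marker.id
    for marker in msg.markers:
        rospy.loginfo("Detected ArUco tag ID: %d", marker.id)

rospy.init_node('aruco_id_printer')
rospy.Subscriber('/aruco/marker_array', MarkerArray, print_ids)
rospy.spin()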
Once we have a recognizable tag, we can create our first entry in the pose dictionary. For this example, I manually positioned my robot half a meter directly in front of the docking station. Then, I used save_pose() to remember this position relative to the docking station. The first argument is a name to identify the pose, and the second is the string that identifies the ArUco tag.
import hello_helpers.aruco_navigation as an

# Save the current base pose relative to the "docking" tag under the name "ready to dock"
aruco = an.ArucoNavigationNode()
aruco.save_pose("ready to dock", "docking")
The save_pose() function will pan the head until the RealSense camera sees the requested tag. Once the head stops moving, a new dictionary entry will be saved to a JSON file in the stretch_core/config directory. The JSON file allows poses to be reused across sessions.
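If you want to double-check what was stored, you can open that JSON file directly. The snippet below is only a sketch: the filename and the exact layout of each entry are assumptions, so adjust them to match the file save_pose() writes on your robot.

import json

# Hypothetical path; look in stretch_core/config for the actual file
with open('/path/to/stretch_core/config/saved_poses.json') as f:
    poses = json.load(f)

# List the pose names that have been saved so far
print(list(poses.keys()))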
Now, we are ready to navigate to the saved pose. go_to_pose() will look up the previously saved pose in the pose dictionary and pan the head to find the same tag, then transform the pose into the current map frame and send a navigation goal.
import hello_helpers.aruco_navigation as an

# Look up the saved pose, re-find the tag, and send the navigation goal
aruco = an.ArucoNavigationNode()
success = aruco.go_to_pose("ready to dock")
(Video: 3x speed)
Once the navigation is complete, the base should be lined up and ready for docking. To finish docking, publish a Twist that drives the base straight backwards.
import rospy
from geometry_msgs.msg import Twist

cmd_vel_pub = rospy.Publisher('/stretch/cmd_vel', Twist, queue_size=1)

if success:
    # Drive straight backwards at 5 cm/s
    twist = Twist()
    twist.linear.x = -0.05
    # 0.5 m / 0.05 m/s = 10 s of driving
    t = 0.5 / 0.05
    start_time = rospy.Time.now()
    while rospy.Time.now() < start_time + rospy.Duration.from_sec(t):
        cmd_vel_pub.publish(twist)
        rospy.sleep(0.25)
    # Publish a zero twist to stop the base once the time is up
    cmd_vel_pub.publish(Twist())
(Video: 3x speed)
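Putting the pieces together, the whole auto-docking routine fits in one short script. This is only a sketch built from the calls shown above: it assumes the "ready to dock" pose has already been saved, that aruco_navigation.launch is running, and that ArucoNavigationNode initializes the ROS node (add rospy.init_node() if it does not).

#!/usr/bin/env python
# Sketch of a complete auto-docking script, combining the calls shown above.
import rospy
from geometry_msgs.msg import Twist
import hello_helpers.aruco_navigation as an

aruco = an.ArucoNavigationNode()
cmd_vel_pub = rospy.Publisher('/stretch/cmd_vel', Twist, queue_size=1)

# Drive to the saved staging pose half a meter in front of the docking station
if aruco.go_to_pose("ready to dock"):
    # Back straight onto the dock at 5 cm/s for 10 s (0.5 m)
    twist = Twist()
    twist.linear.x = -0.05
    start_time = rospy.Time.now()
    while rospy.Time.now() < start_time + rospy.Duration.from_sec(10.0):
        cmd_vel_pub.publish(twist)
        rospy.sleep(0.25)
    # Stop the base once it is on the dock
    cmd_vel_pub.publish(Twist())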
I have found these functions to be a great starting point for automating tasks, and I hope I can inspire some cool new features from the community!
Happy tinkering!
Anna Garverick