Hi @jianrenw!
Thanks for posting this. I wanted to provide additional information about two aspects of your question that might affect you and others in the community: the timing of motion commands and the limits of wheel odometry.
Timing Matters
As Aaron touched on in his response, a smoothly varying trajectory for a joint is currently achieved by sending a new motion command before the previous motion command has finished. The game controller teleoperation code provides a good example of this. Each joint’s motor controller generates smooth motion that depends on the current state of the joint and the command parameters. As such, the result of playing back a recorded trajectory depends on the timing of the executed commands. One exception would be if each command were allowed to finish prior to executing the next command, but this would result in the robot stopping between each command.
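As a rough illustration, here's a minimal sketch of this streaming pattern using stretch_body's Python API. The joint, the recorded values, and the playback period are placeholders, not a recommendation:

```python
import time
import stretch_body.robot

# Hypothetical recorded trajectory: lift heights (meters) sampled at a fixed rate.
recorded_lift_positions = [0.50, 0.52, 0.55, 0.59, 0.64]
command_period_s = 0.1  # assumed playback period; should match the recording rate

robot = stretch_body.robot.Robot()
robot.startup()

# Send each new command before the previous one finishes, so the motor
# controller blends them into smooth motion. Because the result depends on
# timing, playback fidelity depends on keeping this loop period consistent.
for position_m in recorded_lift_positions:
    robot.lift.move_to(position_m)
    robot.push_command()
    time.sleep(command_period_s)

robot.stop()
```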
For reference, each motor controller uses a trapezoidal speed motion generator based on this open source code.
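For intuition, here is a simplified sketch of what a trapezoidal speed profile computes: accelerate at a constant rate, cruise at a constant speed, then decelerate. This is not the firmware implementation linked above, just the underlying math:

```python
import math

def trapezoidal_position(t, distance, v_max, a_max):
    """Position along a move of length `distance` (m) at time `t` (s),
    using a trapezoidal speed profile with cruise speed v_max (m/s)
    and acceleration a_max (m/s^2). Simplified sketch only."""
    if distance <= 0.0:
        return 0.0

    # If the move is too short to reach v_max, the profile becomes triangular.
    v_peak = min(v_max, math.sqrt(distance * a_max))
    t_accel = v_peak / a_max                        # time spent accelerating
    d_accel = 0.5 * a_max * t_accel ** 2            # distance covered accelerating
    t_cruise = (distance - 2.0 * d_accel) / v_peak  # time at constant speed
    t_total = 2.0 * t_accel + t_cruise

    if t <= 0.0:
        return 0.0
    if t < t_accel:                                 # accelerating
        return 0.5 * a_max * t ** 2
    if t < t_accel + t_cruise:                      # cruising
        return d_accel + v_peak * (t - t_accel)
    if t < t_total:                                 # decelerating
        dt = t_total - t
        return distance - 0.5 * a_max * dt ** 2
    return distance                                 # move complete
```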
Also, as Aaron noted, Hello Robot is working to support smooth multi-joint trajectories using splines. These trajectories will support commands that specify a robot pose to be achieved at a particular time. One of the motivations for this new mode of control is to better execute trajectories produced by motion planners, such as trajectories from MoveIt 2. However, this mode of control is under active development and not ready for use, especially since it requires a firmware update.
The Limits of Wheel Odometry
The mobile base’s motion will vary based on the flooring. Hard, high-friction, flat floors will tend to result in more accurate and repeatable motion of the mobile base. Other conditions, such as carpet, floor padding, slippery surfaces, and seams between tiles, will result in less accurate and repeatable motion of the mobile base. This is significantly different from the lift and telescoping arm, which execute consistent motions given typical environmental variations.
When executing base motion commands, stretch_body uses only the rotations of the wheels for feedback. In other words, it relies on wheel odometry, which considers how much the wheels have rotated rather than how the mobile base has actually moved, so any wheel slip goes undetected.
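To make the idea concrete, here is a minimal sketch of differential-drive wheel odometry. The wheel radius and wheel separation below are placeholders, not Stretch's actual parameters:

```python
import math

WHEEL_RADIUS_M = 0.05      # placeholder, not Stretch's actual wheel radius
WHEEL_SEPARATION_M = 0.32  # placeholder, not Stretch's actual wheel separation

def update_odometry(x, y, theta, d_theta_left, d_theta_right):
    """Update the estimated base pose (x, y, theta) from how far each
    wheel has rotated (radians) since the last update. Any wheel slip
    corrupts this estimate, since only wheel rotation is measured."""
    d_left = WHEEL_RADIUS_M * d_theta_left    # left wheel travel (m)
    d_right = WHEEL_RADIUS_M * d_theta_right  # right wheel travel (m)
    d_center = 0.5 * (d_left + d_right)       # distance the base center moved
    d_heading = (d_right - d_left) / WHEEL_SEPARATION_M  # heading change (rad)

    # First-order update: assume motion along the mid-step heading.
    x += d_center * math.cos(theta + 0.5 * d_heading)
    y += d_center * math.sin(theta + 0.5 * d_heading)
    theta += d_heading
    return x, y, theta
```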
For your purposes, you might want to restrict your recording and playback of mobile base trajectories to the same hard, high-friction, flat floor.
Overcoming the Limits of Wheel Odometry
A common approach to overcoming the limits of wheel odometry is to alter the mobile base’s motion using additional sensing. For example, Stretch’s demo code uses a scan matcher with the laser range finder to produce odometry estimates. The laser range finder senses walls and other fixed objects in a room. When the robot moves, the scan matcher estimates how the robot has moved relative to these fixed objects. stretch_ros includes a launch file that starts this scan matcher. You can find simple examples that use scan matcher odometry to adjust the motion of the mobile base in the MoveBase class of navigate.py.
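The pattern in those examples boils down to a simple feedback loop: command a motion, read back the scan-matcher odometry, and command a correction. Here's a stripped-down sketch of that idea; get_scan_matcher_x and send_translate are hypothetical callables standing in for the ROS odometry subscription and base command used in navigate.py:

```python
def move_forward_with_feedback(target_m, get_scan_matcher_x, send_translate,
                               tolerance_m=0.01, max_attempts=5):
    """Translate the base forward by target_m, judging progress with
    scan-matcher odometry rather than wheel odometry alone.
    get_scan_matcher_x and send_translate are hypothetical stand-ins."""
    start_x = get_scan_matcher_x()
    remaining_m = target_m
    for _ in range(max_attempts):
        send_translate(remaining_m)  # command the remaining distance
        moved_m = get_scan_matcher_x() - start_x
        remaining_m = target_m - moved_m  # error measured against the room
        if abs(remaining_m) <= tolerance_m:
            return True
    return False
```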
Visual inertial odometry is another useful way to estimate how the mobile base has moved within a room. Internally at Hello Robot, we’ve tried out the T265 Intel RealSense Tracking Camera mounted to the top of Stretch’s head and pointed towards the ceiling. At some point, we may create 3D-printed mounts for the T265 and make them available to the community with example code.
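If you experiment with a T265 yourself, reading its onboard pose estimates from Python is straightforward with the pyrealsense2 package. This follows librealsense's standard T265 pose example; mounting and frame conventions are up to you:

```python
import pyrealsense2 as rs

# Configure the T265 to stream its onboard pose estimates.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.pose)
pipeline.start(config)

try:
    for _ in range(100):
        frames = pipeline.wait_for_frames()
        pose_frame = frames.get_pose_frame()
        if pose_frame:
            data = pose_frame.get_pose_data()
            # Translation is in meters relative to where streaming started.
            print("x: {:.3f}  y: {:.3f}  z: {:.3f}".format(
                data.translation.x, data.translation.y, data.translation.z))
finally:
    pipeline.stop()
```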
In the context of navigation, it’s common to estimate the mobile base’s pose and plan mobile base motions using a map of the environment. This goes beyond odometry and relates to the wide world of simultaneous localization and mapping (SLAM).
Moving the robot with respect to the room can be useful in a number of circumstances, such as navigating from one part of the room to another. However, some tasks depend more directly on features that aren’t fixed to the room, such as the pose of a person or a movable object. In these circumstances, moving the base with respect to task-relevant features can be useful. For example, Stretch’s object handover demo executes actions based on visual estimates of the pose of a nearby person. Similarly, Stretch’s grasp object and clean surface demos use estimates of the pose of an object and a nearby surface. Each of these demos uses a single estimate prior to acting. Visual servoing is a powerful related approach that uses repeated observations of task-relevant features to provide feedback while moving a robot.
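As a rough sketch of the visual servoing idea, the loop below repeatedly measures a task-relevant feature and commands a small correction until the error is small. estimate_person_angle is a hypothetical perception function, and the gain and thresholds are arbitrary:

```python
import time
import stretch_body.robot

def face_person(robot, estimate_person_angle, gain=0.5,
                tolerance_rad=0.02, max_iterations=50):
    """Rotate the base until a person is centered in view.
    estimate_person_angle is a hypothetical perception callable that
    returns the person's bearing (radians) relative to the robot."""
    for _ in range(max_iterations):
        error_rad = estimate_person_angle()  # re-observe the feature each loop
        if abs(error_rad) <= tolerance_rad:
            return True
        robot.base.rotate_by(gain * error_rad)  # proportional correction
        robot.push_command()
        time.sleep(0.5)  # let the motion progress before re-observing
    return False
```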
I hope you and others find this post helpful.
Best wishes,
Charlie
Charlie Kemp, PhD
co-founder & CTO
Hello Robot Inc.
http://charliekemp.com