Creating smooth motion using trajectories

Suppose you wanted to program Stretch to draw a perfect circle. Like this:

You might put Stretch in front of a whiteboard and hand it a marker.

You need some points on a circle, and fortunately, numpy makes this easy to calculate.

import numpy as np

def draw_circle(n, diameter_m=0.2):
    # arm_init and lift_init are the arm/lift positions at the circle's center
    t = np.linspace(0, 2*np.pi, n, endpoint=True)
    x = (diameter_m / 2) * np.cos(t) + arm_init
    y = (diameter_m / 2) * np.sin(t) + lift_init
    circle_mat = np.c_[x, y]  # n rows of (arm, lift) points
    return circle_mat

Next up, Stretch needs to move its arm and lift joints to the points in circle_mat. We’ll do this by importing Stretch Body, creating a robot called r, and using the r.<arm/lift>.move_to() methods.

import stretch_body.robot
r = stretch_body.robot.Robot()
r.startup()

def draw_circle(n, diameter_m=0.2):
    t = np.linspace(0, 2*np.pi, n, endpoint=True)
    x = (diameter_m / 2) * np.cos(t) + arm_init
    y = (diameter_m / 2) * np.sin(t) + lift_init
    circle_mat = np.c_[x, y]
    for pt in circle_mat:
        r.arm.move_to(pt[0])
        r.lift.move_to(pt[1])
        r.push_command()
        # wait for both joints to settle before commanding the next point
        r.arm.wait_until_at_setpoint()
        r.lift.wait_until_at_setpoint()

If you ran this with 10 points sampled from the circle (n=10), you would get:


This is not a very good circle. Perhaps we would get a better circle if we sampled more points:

Notice a few things:

  1. The circle does get better as we sample more points!
  2. Unfortunately, so does the length of time it takes to draw the same circle (>1 min for the last circle!)
  3. The arm and lift joints do not move in unison
  4. The whole motion is jerky. The robot pauses before moving to the next point.

We’ve discovered some of the limitations of “Position mode”, the mode the robot is in when you use the move_to() or move_by() methods, or use position mode in Stretch ROS.

It would be nice if we could draw a nice smooth circle in one quick motion. Turns out, we can do this using “Trajectory mode”. This is a mode that gives us control over the timing for a joint as it follows a series of points (a.k.a. a trajectory). This allows us to move the arm and lift in coordination with one another and without needing to pause at each point.

import time

def draw_circle_trajectory(n, diameter_m=0.2):
    t = np.linspace(0, 2*np.pi, n, endpoint=True)
    x = (diameter_m / 2) * np.cos(t) + arm_init
    y = (diameter_m / 2) * np.sin(t) + lift_init
    circle_mat = np.c_[x, y]
    time_dt = 15 / n  # spread the n waypoints over 15 seconds
    for i in range(n):
        pt = circle_mat[i]
        pt_t = i * time_dt
        r.arm.trajectory.add(t_s=pt_t, x_m=pt[0])
        r.lift.trajectory.add(t_s=pt_t, x_m=pt[1])
    r.follow_trajectory()  # start executing the queued waypoints
    time.sleep(n * time_dt + 0.5)

Notice how each circle takes 15 seconds to draw, regardless of the number of points sampled. This is done by dividing the total time, 15 seconds, by the number of points sampled. Then, when adding a point to the trajectory using r.<arm/lift>.trajectory.add(t_s=<time>, x_m=<goal>), you can use the point’s index, i, and the time delta, time_dt, to calculate when the robot should achieve that point. A tutorial with more details on Trajectory mode is available here and the API for Trajectory mode is documented here.
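To make the timing math concrete, here’s a standalone sketch of the waypoint generation that runs without a robot. The arm_init and lift_init defaults below are made-up placeholder positions, not values from the original program:

```python
import numpy as np

def circle_waypoints(n, total_s=15.0, diameter_m=0.2, arm_init=0.25, lift_init=0.7):
    """Return (t_s, arm_m, lift_m) tuples, evenly spaced in time.

    arm_init/lift_init are placeholder joint positions; on a real robot
    you'd read them from the joint status at startup.
    """
    t = np.linspace(0, 2 * np.pi, n, endpoint=True)
    x = (diameter_m / 2) * np.cos(t) + arm_init
    y = (diameter_m / 2) * np.sin(t) + lift_init
    time_dt = total_s / n  # same timing rule as draw_circle_trajectory
    return [(i * time_dt, x[i], y[i]) for i in range(n)]
```

Because endpoint=True, the last waypoint returns to the same (arm, lift) position as the first, closing the circle.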

Comparing the circles (n=10) produced by Position and Trajectory modes, the difference is stark.

In the next clip, we’re comparing 200 points for Position mode and 50 points for Trajectory mode. The results are comparable.

In summary:

  1. You can get smooth motion out of the robot using Trajectory mode
  2. You can get coordinated motion of the robot using Trajectory mode
  3. The trajectory API is available for every joint on the robot, including the mobile base, which is given SE2 waypoints constituting a Balkcom-Mason trajectory.

The code snippets in this post are part of a complete Python program. For those interested in using Trajectory mode in ROS2, here’s the same program converted to a ROS2 node.

Happy circle drawing!


Quick nit: you actually can get smooth motion out of Position mode using “preemption”, which is why the robot moves smoothly during xbox teleop. As you move a joystick or press a button, new position goals preempt old position goals, and the underlying controller smoothly transitions to achieving the new goal. As long as new goals keep coming, the motion is continuous.

However, when we’d like to execute pre-planned motions (e.g. drawing a circle), creating smooth motion via preemption becomes very difficult. If we interrupt the previous goal too early, the robot won’t make it to the point and the fidelity of our motion is reduced. If we interrupt the previous goal too late, the robot is already slowing down as it gets closer to the goal, and we’ll notice that the motion is not quite smooth.
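For illustration, preemption-based streaming amounts to sending goals faster than the controller settles. Here’s a rough sketch: the densify helper below is hypothetical (not part of Stretch Body), and the robot calls are left commented since they need hardware:

```python
import numpy as np

def densify(points, factor):
    """Linearly interpolate extra goals between waypoints, so that a steady
    stream of fresh goals can preempt the previous ones (hypothetical helper).
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    idx = np.arange(n)                          # parameterize points by index
    fine = np.linspace(0, n - 1, (n - 1) * factor + 1)
    return np.column_stack([np.interp(fine, idx, points[:, j])
                            for j in range(points.shape[1])])

# Streaming loop sketch (requires hardware, so left commented):
# for arm_m, lift_m in densify(circle_mat, 5):
#     r.arm.move_to(arm_m)    # each new goal preempts the previous one
#     r.lift.move_to(lift_m)
#     r.push_command()
#     time.sleep(0.05)        # send goals faster than the controller settles
```

Tuning the sleep interval is exactly the too-early/too-late tradeoff described above, which is why Trajectory mode is the better tool for pre-planned motion.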

Hi, thanks so much for this tutorial! I found it to be extremely well-written and very easy to follow 🙂

It seems to me that r.arm.trajectory.add(t_s=pt_t, x_m=pt[0]) is the trajectory following version of r.arm.move_to(pt[0]), and that r.lift.trajectory.add(t_s=pt_t, x_m=pt[1]) is the trajectory following version of r.lift.move_to(pt[1]). I was wondering what the trajectory following versions of the following two commands would be:

  • robot.end_of_arm.move_to('wrist_yaw', wrist_yaw)
  • robot.base.rotate_by(rotate_by)

For robot.base.rotate_by(rotate_by), perhaps this would be robot.base.trajectory.add(time=0, x=0.0, y=0.0, theta=0.0), where just the relative theta is passed to theta. Is this correct? I wasn’t quite able to find the relevant command for robot.end_of_arm.move_to('wrist_yaw', wrist_yaw) – is there a place in the documentation that I could refer to?

Thanks so much for your help, it is much appreciated!

Hi @arjung, glad to hear the tutorial was useful!

You’re right about those methods being the “trajectory mode” equivalents of the “position mode” move_to/by methods. The only one you’re missing is the trajectories for the end_of_arm joints:

  • robot.end_of_arm.get_joint('wrist_yaw').trajectory.add(t_s=<?>, x_r=<?>) is the trajectory mode equivalent of robot.end_of_arm.move_to('wrist_yaw', <?>) position mode. To access other joints in the end_of_arm, such as “stretch_gripper”, or “wrist_pitch” and “wrist_roll” if you have a Dex Wrist, you replace “wrist_yaw” with the name of that joint in get_joint('wrist_yaw').
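To sketch what that looks like in practice, here’s a small helper that generates evenly timed (t_s, x_r) waypoints for a yaw sweep. The helper, the angles, and the timing are illustrative values I’ve made up; the commented-out calls show how they would be fed to the joint:

```python
import numpy as np

def yaw_sweep(start_rad, end_rad, n, total_s):
    """Return n evenly timed (t_s, x_r) waypoints sweeping the wrist yaw
    from start_rad to end_rad over total_s seconds (illustrative helper)."""
    times = np.linspace(0.0, total_s, n)
    angles = np.linspace(start_rad, end_rad, n)
    return list(zip(times, angles))

# Feeding the waypoints to the joint (requires hardware, so left commented):
# for t_s, x_r in yaw_sweep(0.0, 1.57, 5, 4.0):
#     robot.end_of_arm.get_joint('wrist_yaw').trajectory.add(t_s=t_s, x_r=x_r)
# robot.follow_trajectory()
```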

robot.base.trajectory.add(time=<?>, x=<?>, y=<?>, theta=<?>) is indeed what’s used for mobile base trajectories. The x, y, and theta components constitute an SE2 pose, and these poses are relative to where the robot started the trajectory. For example, this unit test gives the base five waypoints:

  1. (0, 0, 0) - The first waypoint is always all zeros
  2. (0.4, 0, 0) - Translates the base forwards 0.4m
  3. (0.4, 0, π) - Rotates the base 180 degrees around
  4. (0, 0, π) - Translates the base forward 0.4m
  5. (0, 0, 0) - Rotates the base 180 degrees around, ending up in the same place where the trajectory started.

Note that two subsequent waypoints in the base trajectory can only ever differ by a single translation or rotation. That’s because the underlying representation is a Balkcom-Mason trajectory, where only a series of translation/rotation movements are allowed. Advanced users might replace the underlying trajectory with a different type to allow simultaneous translation/rotation or to achieve other desired properties.
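Writing those five waypoints out as data makes the constraint easy to check. This is a sketch only: the check function is something I’ve added for illustration, and the timing in the commented call is made up:

```python
import math

# The out-and-back trajectory described above, as relative (x, y, theta) poses
waypoints = [
    (0.0, 0.0, 0.0),      # start: always all zeros
    (0.4, 0.0, 0.0),      # translate forward 0.4 m
    (0.4, 0.0, math.pi),  # rotate 180 degrees
    (0.0, 0.0, math.pi),  # translate forward 0.4 m (now facing -x)
    (0.0, 0.0, 0.0),      # rotate 180 degrees, back at the start pose
]

def is_balkcom_mason(wps):
    """Each consecutive pair may change position OR heading, never both."""
    for (x0, y0, th0), (x1, y1, th1) in zip(wps, wps[1:]):
        translated = (x0, y0) != (x1, y1)
        rotated = th0 != th1
        if translated and rotated:
            return False
    return True

# On hardware, each pose would be queued with something like
# (illustrative timing):
# robot.base.trajectory.add(time=i * 4.0, x=x, y=y, theta=theta)
```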

Also note that base trajectories are currently executed “open loop”, meaning that wheel slip on different types of flooring can cause positional error. In the example above, the robot might not end up in the actual same place where the trajectory started, but instead somewhere near it. The solution to this problem is to use feedback from other sensors (e.g. lidar, IMU) to estimate the robot’s location and correct for the wheel slip; however, we have not implemented this in the base’s trajectory mode yet.


@bshah Got it, thanks so much for your prompt response! This is super helpful!

No problem! happy to answer any other questions that come up when you start using this trajectory API.